Test Report: Docker_Linux_crio 20598

63c1754226199ce281e4ac8e931674d5ef457043:2025-04-07:39038

Failed tests (1/331)

| Order | Failed test                 | Duration (s) |
|-------|-----------------------------|--------------|
| 36    | TestAddons/parallel/Ingress | 154.66       |
TestAddons/parallel/Ingress (154.66s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-665428 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-665428 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-665428 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9fee3c92-a14d-492f-aff0-38d10f61255d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9fee3c92-a14d-492f-aff0-38d10f61255d] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.003156474s
I0407 12:59:56.195301  873820 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-665428 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-665428 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.624707963s)

** stderr ** 
	ssh: Process exited with status 28
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
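The non-zero exit above surfaces curl's own exit status: `ssh: Process exited with status 28` means the remote `curl` timed out, since curl exit code 28 is "operation timed out" in the curl manual. A small lookup sketch for the curl exit codes most often seen in these ingress probes (the helper name is illustrative, not part of the test harness):

```python
# Curl exit codes relevant to the probe above, per the curl manual.
CURL_EXIT_CODES = {
    6: "could not resolve host",
    7: "failed to connect to host",
    28: "operation timed out",
    52: "empty reply from server",
}

def explain_curl_exit(status: int) -> str:
    """Return a human-readable meaning for a curl exit status."""
    return CURL_EXIT_CODES.get(status, f"unknown curl exit status {status}")

# The status seen in this failure: the in-VM curl never got a response
# from the ingress controller within the 2m10s the test allowed.
print(explain_curl_exit(28))
```

A timeout (rather than exit 7, connection refused) suggests the ingress controller's port was reachable but the request was never answered.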
addons_test.go:286: (dbg) Run:  kubectl --context addons-665428 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-665428 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-665428
helpers_test.go:235: (dbg) docker inspect addons-665428:
-- stdout --
	[
	    {
	        "Id": "06c16868ee72c95149a526467d370aa659c5268bcb174a0228f3f554f9e08c02",
	        "Created": "2025-04-07T12:56:53.927954349Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 875753,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-07T12:56:53.964426678Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:037bd1b5a0f63899880a74b20d0e40b693fd199ade4ed9b883be5ed5726d15a6",
	        "ResolvConfPath": "/var/lib/docker/containers/06c16868ee72c95149a526467d370aa659c5268bcb174a0228f3f554f9e08c02/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/06c16868ee72c95149a526467d370aa659c5268bcb174a0228f3f554f9e08c02/hostname",
	        "HostsPath": "/var/lib/docker/containers/06c16868ee72c95149a526467d370aa659c5268bcb174a0228f3f554f9e08c02/hosts",
	        "LogPath": "/var/lib/docker/containers/06c16868ee72c95149a526467d370aa659c5268bcb174a0228f3f554f9e08c02/06c16868ee72c95149a526467d370aa659c5268bcb174a0228f3f554f9e08c02-json.log",
	        "Name": "/addons-665428",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-665428:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-665428",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "06c16868ee72c95149a526467d370aa659c5268bcb174a0228f3f554f9e08c02",
	                "LowerDir": "/var/lib/docker/overlay2/023ae18ade9c0b1e4af1aa93f9453f4c0477aaea446a2ce5b45154b80c17f650-init/diff:/var/lib/docker/overlay2/2f5a47ab021f26692cb6998078d8bcbc6a5a3ab67692e345d3ddf18b0edf8bb5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/023ae18ade9c0b1e4af1aa93f9453f4c0477aaea446a2ce5b45154b80c17f650/merged",
	                "UpperDir": "/var/lib/docker/overlay2/023ae18ade9c0b1e4af1aa93f9453f4c0477aaea446a2ce5b45154b80c17f650/diff",
	                "WorkDir": "/var/lib/docker/overlay2/023ae18ade9c0b1e4af1aa93f9453f4c0477aaea446a2ce5b45154b80c17f650/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-665428",
	                "Source": "/var/lib/docker/volumes/addons-665428/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-665428",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-665428",
	                "name.minikube.sigs.k8s.io": "addons-665428",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "74570620ea7deb941c3c2c36ea63c935dff31ea69551b1cd8558f09a9dcb3f9c",
	            "SandboxKey": "/var/run/docker/netns/74570620ea7d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33289"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33290"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33293"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33291"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33292"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-665428": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:9a:49:5f:80:84",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c25c4b2bb83bfa5329b75ede96d17dd5f965d1c6fa57539e4ebcde9ffbc696f8",
	                    "EndpointID": "e17c7a83c729c86883c74678cd4827df5c8b71814692104230aeeef1c2793dec",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-665428",
	                        "06c16868ee72"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-665428 -n addons-665428
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-665428 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-665428 logs -n 25: (1.246183547s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-988404 | jenkins | v1.35.0 | 07 Apr 25 12:56 UTC |                     |
	|         | download-docker-988404                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-988404                                                                   | download-docker-988404 | jenkins | v1.35.0 | 07 Apr 25 12:56 UTC | 07 Apr 25 12:56 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-720301   | jenkins | v1.35.0 | 07 Apr 25 12:56 UTC |                     |
	|         | binary-mirror-720301                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:32847                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-720301                                                                     | binary-mirror-720301   | jenkins | v1.35.0 | 07 Apr 25 12:56 UTC | 07 Apr 25 12:56 UTC |
	| addons  | enable dashboard -p                                                                         | addons-665428          | jenkins | v1.35.0 | 07 Apr 25 12:56 UTC |                     |
	|         | addons-665428                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-665428          | jenkins | v1.35.0 | 07 Apr 25 12:56 UTC |                     |
	|         | addons-665428                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-665428 --wait=true                                                                | addons-665428          | jenkins | v1.35.0 | 07 Apr 25 12:56 UTC | 07 Apr 25 12:59 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-665428 addons disable                                                                | addons-665428          | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-665428 addons disable                                                                | addons-665428          | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-665428 addons disable                                                                | addons-665428          | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-665428 addons disable                                                                | addons-665428          | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|         | amd-gpu-device-plugin                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-665428 addons                                                                        | addons-665428          | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-665428 addons                                                                        | addons-665428          | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-665428          | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|         | -p addons-665428                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-665428 ip                                                                            | addons-665428          | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	| addons  | addons-665428 addons disable                                                                | addons-665428          | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-665428 addons                                                                        | addons-665428          | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-665428 ssh cat                                                                       | addons-665428          | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|         | /opt/local-path-provisioner/pvc-c512f8be-60e7-4823-8659-de8045a39758_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-665428 addons disable                                                                | addons-665428          | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-665428 addons disable                                                                | addons-665428          | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-665428 addons                                                                        | addons-665428          | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-665428 ssh curl -s                                                                   | addons-665428          | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-665428 addons                                                                        | addons-665428          | jenkins | v1.35.0 | 07 Apr 25 13:00 UTC | 07 Apr 25 13:00 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-665428 addons                                                                        | addons-665428          | jenkins | v1.35.0 | 07 Apr 25 13:00 UTC | 07 Apr 25 13:00 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-665428 ip                                                                            | addons-665428          | jenkins | v1.35.0 | 07 Apr 25 13:02 UTC | 07 Apr 25 13:02 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:56:30
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:56:30.702445  875154 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:56:30.702559  875154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:56:30.702564  875154 out.go:358] Setting ErrFile to fd 2...
	I0407 12:56:30.702568  875154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:56:30.702745  875154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-866963/.minikube/bin
	I0407 12:56:30.703416  875154 out.go:352] Setting JSON to false
	I0407 12:56:30.704517  875154 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":16734,"bootTime":1744013857,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:56:30.704667  875154 start.go:139] virtualization: kvm guest
	I0407 12:56:30.706796  875154 out.go:177] * [addons-665428] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 12:56:30.708415  875154 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 12:56:30.708407  875154 notify.go:220] Checking for updates...
	I0407 12:56:30.711074  875154 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:56:30.712683  875154 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-866963/kubeconfig
	I0407 12:56:30.714115  875154 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-866963/.minikube
	I0407 12:56:30.715548  875154 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 12:56:30.717170  875154 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:56:30.718834  875154 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:56:30.744772  875154 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 12:56:30.744941  875154 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:56:30.794517  875154 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2025-04-07 12:56:30.785065684 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0407 12:56:30.794642  875154 docker.go:318] overlay module found
	I0407 12:56:30.797287  875154 out.go:177] * Using the docker driver based on user configuration
	I0407 12:56:30.799016  875154 start.go:297] selected driver: docker
	I0407 12:56:30.799043  875154 start.go:901] validating driver "docker" against <nil>
	I0407 12:56:30.799057  875154 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 12:56:30.800048  875154 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:56:30.848254  875154 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2025-04-07 12:56:30.839071097 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0407 12:56:30.848428  875154 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:56:30.848657  875154 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 12:56:30.850428  875154 out.go:177] * Using Docker driver with root privileges
	I0407 12:56:30.851583  875154 cni.go:84] Creating CNI manager for ""
	I0407 12:56:30.851658  875154 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0407 12:56:30.851674  875154 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0407 12:56:30.851757  875154 start.go:340] cluster config:
	{Name:addons-665428 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-665428 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:56:30.853208  875154 out.go:177] * Starting "addons-665428" primary control-plane node in "addons-665428" cluster
	I0407 12:56:30.854497  875154 cache.go:121] Beginning downloading kic base image for docker with crio
	I0407 12:56:30.855938  875154 out.go:177] * Pulling base image v0.0.46-1743675393-20591 ...
	I0407 12:56:30.857211  875154 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 12:56:30.857276  875154 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-866963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0407 12:56:30.857291  875154 cache.go:56] Caching tarball of preloaded images
	I0407 12:56:30.857342  875154 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon
	I0407 12:56:30.857435  875154 preload.go:172] Found /home/jenkins/minikube-integration/20598-866963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0407 12:56:30.857450  875154 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0407 12:56:30.857840  875154 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/config.json ...
	I0407 12:56:30.857877  875154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/config.json: {Name:mk1632c78e59740b19dc87d94bf78bbae4b1afb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:30.874944  875154 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 to local cache
	I0407 12:56:30.875103  875154 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local cache directory
	I0407 12:56:30.875124  875154 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local cache directory, skipping pull
	I0407 12:56:30.875130  875154 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 exists in cache, skipping pull
	I0407 12:56:30.875141  875154 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 as a tarball
	I0407 12:56:30.875152  875154 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 from local cache
	I0407 12:56:43.540945  875154 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 from cached tarball
	I0407 12:56:43.540999  875154 cache.go:230] Successfully downloaded all kic artifacts
	I0407 12:56:43.541037  875154 start.go:360] acquireMachinesLock for addons-665428: {Name:mk0d1208b37c8f5f796a5d547d740d644b761805 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 12:56:43.541144  875154 start.go:364] duration metric: took 84.684µs to acquireMachinesLock for "addons-665428"
	I0407 12:56:43.541168  875154 start.go:93] Provisioning new machine with config: &{Name:addons-665428 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-665428 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 12:56:43.541239  875154 start.go:125] createHost starting for "" (driver="docker")
	I0407 12:56:43.543452  875154 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0407 12:56:43.543747  875154 start.go:159] libmachine.API.Create for "addons-665428" (driver="docker")
	I0407 12:56:43.543792  875154 client.go:168] LocalClient.Create starting
	I0407 12:56:43.543914  875154 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20598-866963/.minikube/certs/ca.pem
	I0407 12:56:43.795939  875154 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20598-866963/.minikube/certs/cert.pem
	I0407 12:56:44.014200  875154 cli_runner.go:164] Run: docker network inspect addons-665428 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0407 12:56:44.030596  875154 cli_runner.go:211] docker network inspect addons-665428 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0407 12:56:44.030691  875154 network_create.go:284] running [docker network inspect addons-665428] to gather additional debugging logs...
	I0407 12:56:44.030714  875154 cli_runner.go:164] Run: docker network inspect addons-665428
	W0407 12:56:44.047087  875154 cli_runner.go:211] docker network inspect addons-665428 returned with exit code 1
	I0407 12:56:44.047133  875154 network_create.go:287] error running [docker network inspect addons-665428]: docker network inspect addons-665428: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-665428 not found
	I0407 12:56:44.047148  875154 network_create.go:289] output of [docker network inspect addons-665428]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-665428 not found
	
	** /stderr **
	I0407 12:56:44.047313  875154 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0407 12:56:44.064530  875154 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d85eb0}
	I0407 12:56:44.064581  875154 network_create.go:124] attempt to create docker network addons-665428 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0407 12:56:44.064639  875154 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-665428 addons-665428
	I0407 12:56:44.114556  875154 network_create.go:108] docker network addons-665428 192.168.49.0/24 created
	I0407 12:56:44.114600  875154 kic.go:121] calculated static IP "192.168.49.2" for the "addons-665428" container
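The subnet bookkeeping in the lines above (Gateway `.1`, ClientMin `.2`, Broadcast `.255`, and the "calculated static IP" `192.168.49.2` for the first container) follows directly from CIDR arithmetic on the chosen `/24`. A minimal sketch using Python's stdlib `ipaddress` module (variable names are illustrative, not minikube's):

```python
import ipaddress

# minikube probes candidate private blocks and reserves the first free /24;
# once a subnet is chosen, the gateway and first container IP are fixed offsets.
subnet = ipaddress.ip_network("192.168.49.0/24")
hosts = list(subnet.hosts())          # 192.168.49.1 .. 192.168.49.254

gateway = hosts[0]                    # matches Gateway:192.168.49.1 in the log
first_client = hosts[1]               # matches the calculated static IP .49.2
broadcast = subnet.broadcast_address  # matches Broadcast:192.168.49.255

print(gateway, first_client, broadcast, len(hosts))
```

This is only the address arithmetic; the actual free-subnet probing in `network.go` additionally checks the host's routing table for collisions before reserving a block.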
	I0407 12:56:44.114659  875154 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0407 12:56:44.130903  875154 cli_runner.go:164] Run: docker volume create addons-665428 --label name.minikube.sigs.k8s.io=addons-665428 --label created_by.minikube.sigs.k8s.io=true
	I0407 12:56:44.149965  875154 oci.go:103] Successfully created a docker volume addons-665428
	I0407 12:56:44.150051  875154 cli_runner.go:164] Run: docker run --rm --name addons-665428-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-665428 --entrypoint /usr/bin/test -v addons-665428:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 -d /var/lib
	I0407 12:56:49.096952  875154 cli_runner.go:217] Completed: docker run --rm --name addons-665428-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-665428 --entrypoint /usr/bin/test -v addons-665428:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 -d /var/lib: (4.946848163s)
	I0407 12:56:49.097008  875154 oci.go:107] Successfully prepared a docker volume addons-665428
	I0407 12:56:49.097050  875154 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 12:56:49.097076  875154 kic.go:194] Starting extracting preloaded images to volume ...
	I0407 12:56:49.097131  875154 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20598-866963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-665428:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 -I lz4 -xf /preloaded.tar -C /extractDir
	I0407 12:56:53.861069  875154 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20598-866963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-665428:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 -I lz4 -xf /preloaded.tar -C /extractDir: (4.763899039s)
	I0407 12:56:53.861115  875154 kic.go:203] duration metric: took 4.764033181s to extract preloaded images to volume ...
	W0407 12:56:53.861276  875154 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0407 12:56:53.861433  875154 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0407 12:56:53.912129  875154 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-665428 --name addons-665428 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-665428 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-665428 --network addons-665428 --ip 192.168.49.2 --volume addons-665428:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727
	I0407 12:56:54.182659  875154 cli_runner.go:164] Run: docker container inspect addons-665428 --format={{.State.Running}}
	I0407 12:56:54.200763  875154 cli_runner.go:164] Run: docker container inspect addons-665428 --format={{.State.Status}}
	I0407 12:56:54.219293  875154 cli_runner.go:164] Run: docker exec addons-665428 stat /var/lib/dpkg/alternatives/iptables
	I0407 12:56:54.263900  875154 oci.go:144] the created container "addons-665428" has a running status.
	I0407 12:56:54.263936  875154 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20598-866963/.minikube/machines/addons-665428/id_rsa...
	I0407 12:56:54.494519  875154 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20598-866963/.minikube/machines/addons-665428/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0407 12:56:54.523646  875154 cli_runner.go:164] Run: docker container inspect addons-665428 --format={{.State.Status}}
	I0407 12:56:54.549383  875154 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0407 12:56:54.549413  875154 kic_runner.go:114] Args: [docker exec --privileged addons-665428 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0407 12:56:54.607901  875154 cli_runner.go:164] Run: docker container inspect addons-665428 --format={{.State.Status}}
	I0407 12:56:54.627093  875154 machine.go:93] provisionDockerMachine start ...
	I0407 12:56:54.627190  875154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-665428
	I0407 12:56:54.653217  875154 main.go:141] libmachine: Using SSH client type: native
	I0407 12:56:54.653546  875154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33289 <nil> <nil>}
	I0407 12:56:54.653568  875154 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 12:56:54.849314  875154 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-665428
	
	I0407 12:56:54.849355  875154 ubuntu.go:169] provisioning hostname "addons-665428"
	I0407 12:56:54.849435  875154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-665428
	I0407 12:56:54.868816  875154 main.go:141] libmachine: Using SSH client type: native
	I0407 12:56:54.869034  875154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33289 <nil> <nil>}
	I0407 12:56:54.869049  875154 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-665428 && echo "addons-665428" | sudo tee /etc/hostname
	I0407 12:56:55.005449  875154 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-665428
	
	I0407 12:56:55.005543  875154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-665428
	I0407 12:56:55.023158  875154 main.go:141] libmachine: Using SSH client type: native
	I0407 12:56:55.023379  875154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33289 <nil> <nil>}
	I0407 12:56:55.023395  875154 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-665428' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-665428/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-665428' | sudo tee -a /etc/hosts; 
				fi
			fi
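The provisioning script above only rewrites `/etc/hosts` when the hostname is missing, and prefers editing an existing `127.0.1.1` line over appending a new one. The same branch logic can be sketched in Python as a pure string transformation (a simplified model of the `grep`/`sed`/`tee` branches, not minikube code; `ensure_hostname` is a hypothetical helper name):

```python
import re

def ensure_hostname(hosts_text: str, name: str) -> str:
    """Mirror the shell logic: if no line ends with the hostname, either
    rewrite the existing 127.0.1.1 line or append a new one."""
    if re.search(r"\s" + re.escape(name) + r"$", hosts_text, re.MULTILINE):
        return hosts_text  # hostname already present, nothing to do
    if re.search(r"^127\.0\.1\.1\s", hosts_text, re.MULTILINE):
        # sed branch: replace the whole 127.0.1.1 line
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {name}",
                      hosts_text, flags=re.MULTILINE)
    # tee -a branch: append a fresh mapping
    return hosts_text.rstrip("\n") + f"\n127.0.1.1 {name}\n"

print(ensure_hostname("127.0.0.1 localhost\n127.0.1.1 old-name\n",
                      "addons-665428"))
```

Using `127.0.1.1` rather than `127.0.0.1` for the machine's own hostname is the Debian/Ubuntu convention the script leans on, which is why it targets that exact line.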
	I0407 12:56:55.149794  875154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 12:56:55.149903  875154 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20598-866963/.minikube CaCertPath:/home/jenkins/minikube-integration/20598-866963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20598-866963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20598-866963/.minikube}
	I0407 12:56:55.149954  875154 ubuntu.go:177] setting up certificates
	I0407 12:56:55.149971  875154 provision.go:84] configureAuth start
	I0407 12:56:55.150027  875154 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-665428
	I0407 12:56:55.167950  875154 provision.go:143] copyHostCerts
	I0407 12:56:55.168018  875154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-866963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20598-866963/.minikube/ca.pem (1082 bytes)
	I0407 12:56:55.168144  875154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-866963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20598-866963/.minikube/cert.pem (1123 bytes)
	I0407 12:56:55.168213  875154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-866963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20598-866963/.minikube/key.pem (1679 bytes)
	I0407 12:56:55.168274  875154 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20598-866963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20598-866963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20598-866963/.minikube/certs/ca-key.pem org=jenkins.addons-665428 san=[127.0.0.1 192.168.49.2 addons-665428 localhost minikube]
	I0407 12:56:55.312043  875154 provision.go:177] copyRemoteCerts
	I0407 12:56:55.312104  875154 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 12:56:55.312139  875154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-665428
	I0407 12:56:55.330225  875154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/addons-665428/id_rsa Username:docker}
	I0407 12:56:55.422254  875154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-866963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0407 12:56:55.446129  875154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-866963/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0407 12:56:55.469406  875154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-866963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0407 12:56:55.491550  875154 provision.go:87] duration metric: took 341.56299ms to configureAuth
	I0407 12:56:55.491581  875154 ubuntu.go:193] setting minikube options for container-runtime
	I0407 12:56:55.491749  875154 config.go:182] Loaded profile config "addons-665428": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 12:56:55.491851  875154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-665428
	I0407 12:56:55.508632  875154 main.go:141] libmachine: Using SSH client type: native
	I0407 12:56:55.508861  875154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33289 <nil> <nil>}
	I0407 12:56:55.508879  875154 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0407 12:56:55.725860  875154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0407 12:56:55.725891  875154 machine.go:96] duration metric: took 1.098768173s to provisionDockerMachine
	I0407 12:56:55.725908  875154 client.go:171] duration metric: took 12.182103422s to LocalClient.Create
	I0407 12:56:55.725924  875154 start.go:167] duration metric: took 12.182181722s to libmachine.API.Create "addons-665428"
	I0407 12:56:55.725933  875154 start.go:293] postStartSetup for "addons-665428" (driver="docker")
	I0407 12:56:55.725946  875154 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 12:56:55.726004  875154 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 12:56:55.726048  875154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-665428
	I0407 12:56:55.743579  875154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/addons-665428/id_rsa Username:docker}
	I0407 12:56:55.834823  875154 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 12:56:55.838344  875154 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0407 12:56:55.838376  875154 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0407 12:56:55.838383  875154 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0407 12:56:55.838391  875154 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0407 12:56:55.838402  875154 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-866963/.minikube/addons for local assets ...
	I0407 12:56:55.838468  875154 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-866963/.minikube/files for local assets ...
	I0407 12:56:55.838492  875154 start.go:296] duration metric: took 112.551502ms for postStartSetup
	I0407 12:56:55.838803  875154 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-665428
	I0407 12:56:55.856142  875154 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/config.json ...
	I0407 12:56:55.856439  875154 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 12:56:55.856494  875154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-665428
	I0407 12:56:55.874220  875154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/addons-665428/id_rsa Username:docker}
	I0407 12:56:55.966243  875154 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0407 12:56:55.970621  875154 start.go:128] duration metric: took 12.429364526s to createHost
	I0407 12:56:55.970650  875154 start.go:83] releasing machines lock for "addons-665428", held for 12.429494387s
	I0407 12:56:55.970714  875154 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-665428
	I0407 12:56:55.987088  875154 ssh_runner.go:195] Run: cat /version.json
	I0407 12:56:55.987146  875154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-665428
	I0407 12:56:55.987154  875154 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 12:56:55.987240  875154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-665428
	I0407 12:56:56.005532  875154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/addons-665428/id_rsa Username:docker}
	I0407 12:56:56.005907  875154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/addons-665428/id_rsa Username:docker}
	I0407 12:56:56.093347  875154 ssh_runner.go:195] Run: systemctl --version
	I0407 12:56:56.165538  875154 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0407 12:56:56.305061  875154 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0407 12:56:56.309512  875154 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 12:56:56.328571  875154 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0407 12:56:56.328707  875154 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 12:56:56.357007  875154 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0407 12:56:56.357032  875154 start.go:495] detecting cgroup driver to use...
	I0407 12:56:56.357064  875154 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0407 12:56:56.357106  875154 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 12:56:56.372356  875154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 12:56:56.383153  875154 docker.go:217] disabling cri-docker service (if available) ...
	I0407 12:56:56.383210  875154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 12:56:56.396446  875154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 12:56:56.410358  875154 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 12:56:56.491175  875154 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 12:56:56.572120  875154 docker.go:233] disabling docker service ...
	I0407 12:56:56.572182  875154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 12:56:56.591772  875154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 12:56:56.603134  875154 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 12:56:56.687289  875154 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 12:56:56.768561  875154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 12:56:56.779415  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 12:56:56.794845  875154 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0407 12:56:56.794912  875154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 12:56:56.804490  875154 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0407 12:56:56.804571  875154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 12:56:56.814116  875154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 12:56:56.823260  875154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 12:56:56.832684  875154 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 12:56:56.841492  875154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 12:56:56.850972  875154 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 12:56:56.866096  875154 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 12:56:56.875434  875154 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 12:56:56.883522  875154 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 12:56:56.891399  875154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:56:56.961904  875154 ssh_runner.go:195] Run: sudo systemctl restart crio
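	The CRI-O reconfiguration that minikube runs above (pause image, cgroup manager, conmon cgroup) is a series of in-place `sed` edits on `/etc/crio/crio.conf.d/02-crio.conf`. A minimal sketch of the same substitutions, run against a scratch copy rather than a live node (the starting values below are hypothetical; only the `sed` expressions mirror the log):

```shell
# Apply minikube-style CRI-O overrides to a scratch config copy.
conf=$(mktemp)
cat > "$conf" <<'EOF'
pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# The same substitutions the log performs via ssh_runner:
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"

cat "$conf"
```

	Note the ordering: the old `conmon_cgroup` line is deleted first, then a fresh `conmon_cgroup = "pod"` is appended after the `cgroup_manager` line, which keeps the edit idempotent across restarts.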
	I0407 12:56:57.041939  875154 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 12:56:57.042022  875154 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0407 12:56:57.045531  875154 start.go:563] Will wait 60s for crictl version
	I0407 12:56:57.045618  875154 ssh_runner.go:195] Run: which crictl
	I0407 12:56:57.048750  875154 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 12:56:57.083057  875154 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0407 12:56:57.083153  875154 ssh_runner.go:195] Run: crio --version
	I0407 12:56:57.120456  875154 ssh_runner.go:195] Run: crio --version
	I0407 12:56:57.158187  875154 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0407 12:56:57.159635  875154 cli_runner.go:164] Run: docker network inspect addons-665428 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0407 12:56:57.177097  875154 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0407 12:56:57.180870  875154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 12:56:57.191546  875154 kubeadm.go:883] updating cluster {Name:addons-665428 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-665428 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 12:56:57.191695  875154 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 12:56:57.191754  875154 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 12:56:57.259582  875154 crio.go:514] all images are preloaded for cri-o runtime.
	I0407 12:56:57.259619  875154 crio.go:433] Images already preloaded, skipping extraction
	I0407 12:56:57.259685  875154 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 12:56:57.294204  875154 crio.go:514] all images are preloaded for cri-o runtime.
	I0407 12:56:57.294230  875154 cache_images.go:84] Images are preloaded, skipping loading
	I0407 12:56:57.294238  875154 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.32.2 crio true true} ...
	I0407 12:56:57.294325  875154 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-665428 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:addons-665428 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 12:56:57.294390  875154 ssh_runner.go:195] Run: crio config
	I0407 12:56:57.339300  875154 cni.go:84] Creating CNI manager for ""
	I0407 12:56:57.339331  875154 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0407 12:56:57.339345  875154 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 12:56:57.339370  875154 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-665428 NodeName:addons-665428 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 12:56:57.339569  875154 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-665428"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 12:56:57.339650  875154 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 12:56:57.348528  875154 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 12:56:57.348645  875154 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 12:56:57.357271  875154 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0407 12:56:57.375111  875154 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 12:56:57.392550  875154 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0407 12:56:57.409776  875154 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0407 12:56:57.413263  875154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
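	The `/etc/hosts` update above is minikube's idempotent pattern: strip any stale entry for the hostname, then re-append the current one, so repeated runs never produce duplicates. A sketch of the same pattern on a scratch file (file contents are illustrative):

```shell
# Idempotent hosts-entry update, as run by minikube via ssh_runner.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.2\tcontrol-plane.minikube.internal\n' > "$hosts"

# Drop any existing entry for the name, then append the fresh mapping.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  echo "192.168.49.2	control-plane.minikube.internal"; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

	Running the block twice leaves exactly one `control-plane.minikube.internal` line, which is why minikube can safely re-run it on every start.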
	I0407 12:56:57.423570  875154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:56:57.498426  875154 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 12:56:57.511619  875154 certs.go:68] Setting up /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428 for IP: 192.168.49.2
	I0407 12:56:57.511641  875154 certs.go:194] generating shared ca certs ...
	I0407 12:56:57.511657  875154 certs.go:226] acquiring lock for ca certs: {Name:mkab30815ad3439704ed93b8bcda25ece44f674f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:57.511813  875154 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20598-866963/.minikube/ca.key
	I0407 12:56:57.668187  875154 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-866963/.minikube/ca.crt ...
	I0407 12:56:57.668227  875154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-866963/.minikube/ca.crt: {Name:mk29d23237faa0d74fae740e3c22c67fcff05c19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:57.668448  875154 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-866963/.minikube/ca.key ...
	I0407 12:56:57.668466  875154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-866963/.minikube/ca.key: {Name:mkeccac6eb47ea5aefc31e0a231ee237a2601b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:57.668598  875154 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20598-866963/.minikube/proxy-client-ca.key
	I0407 12:56:57.775353  875154 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-866963/.minikube/proxy-client-ca.crt ...
	I0407 12:56:57.775390  875154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-866963/.minikube/proxy-client-ca.crt: {Name:mk3f5e48510590705bbe419aa20c5d78266d9587 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:57.775608  875154 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-866963/.minikube/proxy-client-ca.key ...
	I0407 12:56:57.775625  875154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-866963/.minikube/proxy-client-ca.key: {Name:mkc0a5b7d2b6af6152d2b7acea9123508b5a6bbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:57.775729  875154 certs.go:256] generating profile certs ...
	I0407 12:56:57.775810  875154 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.key
	I0407 12:56:57.775835  875154 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt with IP's: []
	I0407 12:56:58.191239  875154 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt ...
	I0407 12:56:58.191293  875154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: {Name:mkefad86627797c06ef856c4729b98ea0e9fea82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:58.191527  875154 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.key ...
	I0407 12:56:58.191545  875154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.key: {Name:mk9505585a647b46f3956c57ea7ad7e46b065a65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:58.191670  875154 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/apiserver.key.ebfc0e06
	I0407 12:56:58.191694  875154 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/apiserver.crt.ebfc0e06 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0407 12:56:58.282302  875154 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/apiserver.crt.ebfc0e06 ...
	I0407 12:56:58.282343  875154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/apiserver.crt.ebfc0e06: {Name:mk50516f1cea29023a91cde78568f603b9a66171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:58.282570  875154 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/apiserver.key.ebfc0e06 ...
	I0407 12:56:58.282592  875154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/apiserver.key.ebfc0e06: {Name:mk94e4c972b2584508a31a290808d40e6c43536b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:58.282714  875154 certs.go:381] copying /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/apiserver.crt.ebfc0e06 -> /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/apiserver.crt
	I0407 12:56:58.282831  875154 certs.go:385] copying /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/apiserver.key.ebfc0e06 -> /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/apiserver.key
	I0407 12:56:58.282901  875154 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/proxy-client.key
	I0407 12:56:58.282924  875154 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/proxy-client.crt with IP's: []
	I0407 12:56:58.364484  875154 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/proxy-client.crt ...
	I0407 12:56:58.364525  875154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/proxy-client.crt: {Name:mke4c0896b1939bd931bea80a34a5629144157a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:58.364726  875154 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/proxy-client.key ...
	I0407 12:56:58.364745  875154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/proxy-client.key: {Name:mkc8a6442c7553efb06efdd244ef0fe883af03cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:58.365035  875154 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-866963/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 12:56:58.365080  875154 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-866963/.minikube/certs/ca.pem (1082 bytes)
	I0407 12:56:58.365108  875154 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-866963/.minikube/certs/cert.pem (1123 bytes)
	I0407 12:56:58.365143  875154 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-866963/.minikube/certs/key.pem (1679 bytes)
	I0407 12:56:58.365806  875154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-866963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 12:56:58.389609  875154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-866963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0407 12:56:58.412356  875154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-866963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 12:56:58.435941  875154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-866963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0407 12:56:58.459353  875154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0407 12:56:58.482822  875154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 12:56:58.506438  875154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 12:56:58.530235  875154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 12:56:58.554075  875154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-866963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 12:56:58.579685  875154 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 12:56:58.597745  875154 ssh_runner.go:195] Run: openssl version
	I0407 12:56:58.603316  875154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 12:56:58.613241  875154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:56:58.616807  875154 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:56 /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:56:58.616875  875154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:56:58.623757  875154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
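	The `b5213941.0` symlink above is the OpenSSL subject-hash naming scheme: `openssl x509 -hash` prints a short hash of the certificate's subject, and the trust directory needs a `<hash>.0` link for lookup. A sketch reproducing the same steps with a throwaway self-signed cert (paths and CN are hypothetical; assumes the `openssl` CLI is installed):

```shell
# Recreate the subject-hash link minikube sets up for its CA.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null

# Same hash the log computes, then the same `ln -fs` it runs.
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"
ls -l "$dir/$hash.0"
```

	The `.0` suffix disambiguates distinct certificates whose subjects happen to hash to the same value.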
	I0407 12:56:58.632932  875154 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 12:56:58.636301  875154 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 12:56:58.636362  875154 kubeadm.go:392] StartCluster: {Name:addons-665428 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-665428 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:56:58.636457  875154 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 12:56:58.636501  875154 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 12:56:58.671518  875154 cri.go:89] found id: ""
	I0407 12:56:58.671614  875154 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 12:56:58.680111  875154 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 12:56:58.688355  875154 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0407 12:56:58.688427  875154 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 12:56:58.696677  875154 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 12:56:58.696696  875154 kubeadm.go:157] found existing configuration files:
	
	I0407 12:56:58.696755  875154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 12:56:58.704877  875154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 12:56:58.704942  875154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 12:56:58.713109  875154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 12:56:58.721457  875154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 12:56:58.721524  875154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 12:56:58.729996  875154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 12:56:58.738443  875154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 12:56:58.738532  875154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 12:56:58.746547  875154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 12:56:58.754352  875154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 12:56:58.754420  875154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 12:56:58.762692  875154 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0407 12:56:58.799495  875154 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0407 12:56:58.799573  875154 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 12:56:58.817660  875154 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0407 12:56:58.817778  875154 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0407 12:56:58.817830  875154 kubeadm.go:310] OS: Linux
	I0407 12:56:58.817900  875154 kubeadm.go:310] CGROUPS_CPU: enabled
	I0407 12:56:58.817979  875154 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0407 12:56:58.818045  875154 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0407 12:56:58.818113  875154 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0407 12:56:58.818200  875154 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0407 12:56:58.818246  875154 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0407 12:56:58.818290  875154 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0407 12:56:58.818336  875154 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0407 12:56:58.818404  875154 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0407 12:56:58.869994  875154 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 12:56:58.870112  875154 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 12:56:58.870236  875154 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0407 12:56:58.876943  875154 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 12:56:58.880751  875154 out.go:235]   - Generating certificates and keys ...
	I0407 12:56:58.880856  875154 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 12:56:58.880931  875154 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 12:56:59.118202  875154 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0407 12:56:59.189270  875154 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0407 12:56:59.353191  875154 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0407 12:56:59.568634  875154 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0407 12:56:59.855733  875154 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0407 12:56:59.855885  875154 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-665428 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0407 12:57:00.011937  875154 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0407 12:57:00.012060  875154 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-665428 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0407 12:57:00.182213  875154 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0407 12:57:00.463670  875154 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0407 12:57:00.691226  875154 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0407 12:57:00.691313  875154 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 12:57:00.961580  875154 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 12:57:01.198640  875154 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0407 12:57:01.313355  875154 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 12:57:01.451358  875154 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 12:57:01.772024  875154 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 12:57:01.772495  875154 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 12:57:01.774868  875154 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 12:57:01.777211  875154 out.go:235]   - Booting up control plane ...
	I0407 12:57:01.777345  875154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 12:57:01.777451  875154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 12:57:01.778187  875154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 12:57:01.787750  875154 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 12:57:01.793364  875154 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 12:57:01.793444  875154 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 12:57:01.876164  875154 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0407 12:57:01.876306  875154 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0407 12:57:02.878010  875154 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001987449s
	I0407 12:57:02.878157  875154 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0407 12:57:07.379915  875154 kubeadm.go:310] [api-check] The API server is healthy after 4.501804217s
	I0407 12:57:07.391650  875154 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0407 12:57:07.402767  875154 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0407 12:57:07.422880  875154 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0407 12:57:07.423219  875154 kubeadm.go:310] [mark-control-plane] Marking the node addons-665428 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0407 12:57:07.431083  875154 kubeadm.go:310] [bootstrap-token] Using token: zskoci.n83fy55du8m4eoe3
	I0407 12:57:07.433748  875154 out.go:235]   - Configuring RBAC rules ...
	I0407 12:57:07.433934  875154 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0407 12:57:07.438409  875154 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0407 12:57:07.446438  875154 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0407 12:57:07.448983  875154 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0407 12:57:07.451291  875154 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0407 12:57:07.454943  875154 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0407 12:57:07.786875  875154 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0407 12:57:08.215675  875154 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0407 12:57:08.788061  875154 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0407 12:57:08.788830  875154 kubeadm.go:310] 
	I0407 12:57:08.788923  875154 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0407 12:57:08.788933  875154 kubeadm.go:310] 
	I0407 12:57:08.789029  875154 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0407 12:57:08.789040  875154 kubeadm.go:310] 
	I0407 12:57:08.789082  875154 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0407 12:57:08.789172  875154 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0407 12:57:08.789218  875154 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0407 12:57:08.789243  875154 kubeadm.go:310] 
	I0407 12:57:08.789374  875154 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0407 12:57:08.789384  875154 kubeadm.go:310] 
	I0407 12:57:08.789445  875154 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0407 12:57:08.789454  875154 kubeadm.go:310] 
	I0407 12:57:08.789516  875154 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0407 12:57:08.789640  875154 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0407 12:57:08.789732  875154 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0407 12:57:08.789751  875154 kubeadm.go:310] 
	I0407 12:57:08.789866  875154 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0407 12:57:08.789985  875154 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0407 12:57:08.789995  875154 kubeadm.go:310] 
	I0407 12:57:08.790104  875154 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zskoci.n83fy55du8m4eoe3 \
	I0407 12:57:08.790249  875154 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:feb9141b8f8b21e4e1c19995947b79c6fc58013b4cb57d9d327b012e52b8ac63 \
	I0407 12:57:08.790276  875154 kubeadm.go:310] 	--control-plane 
	I0407 12:57:08.790283  875154 kubeadm.go:310] 
	I0407 12:57:08.790355  875154 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0407 12:57:08.790386  875154 kubeadm.go:310] 
	I0407 12:57:08.790519  875154 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zskoci.n83fy55du8m4eoe3 \
	I0407 12:57:08.790662  875154 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:feb9141b8f8b21e4e1c19995947b79c6fc58013b4cb57d9d327b012e52b8ac63 
	I0407 12:57:08.792685  875154 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0407 12:57:08.792983  875154 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0407 12:57:08.793124  875154 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 12:57:08.793163  875154 cni.go:84] Creating CNI manager for ""
	I0407 12:57:08.793174  875154 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0407 12:57:08.795156  875154 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0407 12:57:08.796441  875154 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0407 12:57:08.800464  875154 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0407 12:57:08.800493  875154 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0407 12:57:08.819682  875154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0407 12:57:09.028352  875154 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0407 12:57:09.028415  875154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:57:09.028457  875154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-665428 minikube.k8s.io/updated_at=2025_04_07T12_57_09_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277 minikube.k8s.io/name=addons-665428 minikube.k8s.io/primary=true
	I0407 12:57:09.035715  875154 ops.go:34] apiserver oom_adj: -16
	I0407 12:57:09.130187  875154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:57:09.630547  875154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:57:10.130651  875154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:57:10.630451  875154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:57:11.131210  875154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:57:11.630792  875154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:57:12.130755  875154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:57:12.631028  875154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:57:13.130772  875154 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:57:13.221588  875154 kubeadm.go:1113] duration metric: took 4.193232155s to wait for elevateKubeSystemPrivileges
	I0407 12:57:13.221633  875154 kubeadm.go:394] duration metric: took 14.585276264s to StartCluster
	I0407 12:57:13.221661  875154 settings.go:142] acquiring lock: {Name:mk19dba715dc20e10498f6e0e0101f8474bf8293 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:57:13.221788  875154 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20598-866963/kubeconfig
	I0407 12:57:13.222304  875154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-866963/kubeconfig: {Name:mk48216275c905661531fbf615e7016736575b40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:57:13.222541  875154 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 12:57:13.222595  875154 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0407 12:57:13.222582  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0407 12:57:13.222714  875154 addons.go:69] Setting yakd=true in profile "addons-665428"
	I0407 12:57:13.222733  875154 addons.go:238] Setting addon yakd=true in "addons-665428"
	I0407 12:57:13.222758  875154 addons.go:69] Setting gcp-auth=true in profile "addons-665428"
	I0407 12:57:13.222752  875154 addons.go:69] Setting default-storageclass=true in profile "addons-665428"
	I0407 12:57:13.222778  875154 mustload.go:65] Loading cluster: addons-665428
	I0407 12:57:13.222777  875154 addons.go:69] Setting registry=true in profile "addons-665428"
	I0407 12:57:13.222756  875154 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-665428"
	I0407 12:57:13.222803  875154 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-665428"
	I0407 12:57:13.222815  875154 addons.go:238] Setting addon registry=true in "addons-665428"
	I0407 12:57:13.222815  875154 config.go:182] Loaded profile config "addons-665428": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 12:57:13.222831  875154 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-665428"
	I0407 12:57:13.222833  875154 addons.go:69] Setting ingress=true in profile "addons-665428"
	I0407 12:57:13.222830  875154 addons.go:69] Setting inspektor-gadget=true in profile "addons-665428"
	I0407 12:57:13.222849  875154 host.go:66] Checking if "addons-665428" exists ...
	I0407 12:57:13.222860  875154 addons.go:238] Setting addon inspektor-gadget=true in "addons-665428"
	I0407 12:57:13.222865  875154 addons.go:238] Setting addon ingress=true in "addons-665428"
	I0407 12:57:13.222774  875154 addons.go:69] Setting ingress-dns=true in profile "addons-665428"
	I0407 12:57:13.222898  875154 addons.go:238] Setting addon ingress-dns=true in "addons-665428"
	I0407 12:57:13.222910  875154 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-665428"
	I0407 12:57:13.222929  875154 host.go:66] Checking if "addons-665428" exists ...
	I0407 12:57:13.222929  875154 host.go:66] Checking if "addons-665428" exists ...
	I0407 12:57:13.222935  875154 addons.go:69] Setting metrics-server=true in profile "addons-665428"
	I0407 12:57:13.222943  875154 host.go:66] Checking if "addons-665428" exists ...
	I0407 12:57:13.222960  875154 addons.go:238] Setting addon metrics-server=true in "addons-665428"
	I0407 12:57:13.222975  875154 host.go:66] Checking if "addons-665428" exists ...
	I0407 12:57:13.223012  875154 config.go:182] Loaded profile config "addons-665428": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 12:57:13.223248  875154 cli_runner.go:164] Run: docker container inspect addons-665428 --format={{.State.Status}}
	I0407 12:57:13.223335  875154 cli_runner.go:164] Run: docker container inspect addons-665428 --format={{.State.Status}}
	I0407 12:57:13.223360  875154 cli_runner.go:164] Run: docker container inspect addons-665428 --format={{.State.Status}}
	I0407 12:57:13.223366  875154 cli_runner.go:164] Run: docker container inspect addons-665428 --format={{.State.Status}}
	I0407 12:57:13.223465  875154 cli_runner.go:164] Run: docker container inspect addons-665428 --format={{.State.Status}}
	I0407 12:57:13.223491  875154 cli_runner.go:164] Run: docker container inspect addons-665428 --format={{.State.Status}}
	I0407 12:57:13.223567  875154 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-665428"
	I0407 12:57:13.223599  875154 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-665428"
	I0407 12:57:13.223895  875154 addons.go:69] Setting volcano=true in profile "addons-665428"
	I0407 12:57:13.224033  875154 addons.go:238] Setting addon volcano=true in "addons-665428"
	I0407 12:57:13.224238  875154 host.go:66] Checking if "addons-665428" exists ...
	I0407 12:57:13.222768  875154 host.go:66] Checking if "addons-665428" exists ...
	I0407 12:57:13.222929  875154 host.go:66] Checking if "addons-665428" exists ...
	I0407 12:57:13.222819  875154 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-665428"
	I0407 12:57:13.222821  875154 addons.go:69] Setting cloud-spanner=true in profile "addons-665428"
	I0407 12:57:13.223968  875154 addons.go:69] Setting storage-provisioner=true in profile "addons-665428"
	I0407 12:57:13.224369  875154 addons.go:238] Setting addon storage-provisioner=true in "addons-665428"
	I0407 12:57:13.224439  875154 host.go:66] Checking if "addons-665428" exists ...
	I0407 12:57:13.222816  875154 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-665428"
	I0407 12:57:13.224553  875154 host.go:66] Checking if "addons-665428" exists ...
	I0407 12:57:13.223955  875154 cli_runner.go:164] Run: docker container inspect addons-665428 --format={{.State.Status}}
	I0407 12:57:13.222791  875154 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-665428"
	I0407 12:57:13.224876  875154 addons.go:69] Setting volumesnapshots=true in profile "addons-665428"
	I0407 12:57:13.225417  875154 cli_runner.go:164] Run: docker container inspect addons-665428 --format={{.State.Status}}
	I0407 12:57:13.226235  875154 addons.go:238] Setting addon volumesnapshots=true in "addons-665428"
	I0407 12:57:13.226278  875154 host.go:66] Checking if "addons-665428" exists ...
	I0407 12:57:13.226758  875154 cli_runner.go:164] Run: docker container inspect addons-665428 --format={{.State.Status}}
	I0407 12:57:13.227528  875154 cli_runner.go:164] Run: docker container inspect addons-665428 --format={{.State.Status}}
	I0407 12:57:13.225446  875154 host.go:66] Checking if "addons-665428" exists ...
	I0407 12:57:13.225498  875154 cli_runner.go:164] Run: docker container inspect addons-665428 --format={{.State.Status}}
	I0407 12:57:13.228107  875154 cli_runner.go:164] Run: docker container inspect addons-665428 --format={{.State.Status}}
	I0407 12:57:13.225510  875154 addons.go:238] Setting addon cloud-spanner=true in "addons-665428"
	I0407 12:57:13.229890  875154 host.go:66] Checking if "addons-665428" exists ...
	I0407 12:57:13.225572  875154 cli_runner.go:164] Run: docker container inspect addons-665428 --format={{.State.Status}}
	I0407 12:57:13.225677  875154 out.go:177] * Verifying Kubernetes components...
	I0407 12:57:13.225929  875154 cli_runner.go:164] Run: docker container inspect addons-665428 --format={{.State.Status}}
	I0407 12:57:13.235384  875154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:57:13.253922  875154 cli_runner.go:164] Run: docker container inspect addons-665428 --format={{.State.Status}}
	I0407 12:57:13.254015  875154 cli_runner.go:164] Run: docker container inspect addons-665428 --format={{.State.Status}}
	I0407 12:57:13.258887  875154 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0407 12:57:13.263160  875154 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0407 12:57:13.265189  875154 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0407 12:57:13.266828  875154 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0407 12:57:13.266896  875154 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0407 12:57:13.268857  875154 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0407 12:57:13.268896  875154 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0407 12:57:13.270295  875154 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0407 12:57:13.270301  875154 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0407 12:57:13.270507  875154 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0407 12:57:13.270528  875154 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0407 12:57:13.270596  875154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-665428
	I0407 12:57:13.273664  875154 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0407 12:57:13.273690  875154 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0407 12:57:13.273715  875154 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0407 12:57:13.273761  875154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-665428
	I0407 12:57:13.273974  875154 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0407 12:57:13.273980  875154 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0407 12:57:13.275084  875154 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0407 12:57:13.275117  875154 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0407 12:57:13.275183  875154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-665428
	I0407 12:57:13.275520  875154 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0407 12:57:13.275535  875154 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0407 12:57:13.275602  875154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-665428
	I0407 12:57:13.278431  875154 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0407 12:57:13.280162  875154 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0407 12:57:13.281457  875154 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0407 12:57:13.281588  875154 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0407 12:57:13.281606  875154 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0407 12:57:13.281680  875154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-665428
	I0407 12:57:13.282690  875154 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0407 12:57:13.282736  875154 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-665428"
	I0407 12:57:13.282748  875154 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0407 12:57:13.282778  875154 host.go:66] Checking if "addons-665428" exists ...
	I0407 12:57:13.282800  875154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-665428
	I0407 12:57:13.283256  875154 cli_runner.go:164] Run: docker container inspect addons-665428 --format={{.State.Status}}
	I0407 12:57:13.287021  875154 host.go:66] Checking if "addons-665428" exists ...
	I0407 12:57:13.311721  875154 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0407 12:57:13.313085  875154 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.30
	I0407 12:57:13.314443  875154 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0407 12:57:13.314463  875154 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0407 12:57:13.314538  875154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-665428
	I0407 12:57:13.319290  875154 out.go:177]   - Using image docker.io/registry:2.8.3
	W0407 12:57:13.320635  875154 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0407 12:57:13.321974  875154 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0407 12:57:13.321996  875154 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0407 12:57:13.322059  875154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-665428
	I0407 12:57:13.338876  875154 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0407 12:57:13.339082  875154 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0407 12:57:13.340266  875154 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 12:57:13.340401  875154 addons.go:238] Setting addon default-storageclass=true in "addons-665428"
	I0407 12:57:13.340446  875154 host.go:66] Checking if "addons-665428" exists ...
	I0407 12:57:13.340585  875154 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0407 12:57:13.340767  875154 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0407 12:57:13.340782  875154 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0407 12:57:13.340842  875154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-665428
	I0407 12:57:13.341053  875154 cli_runner.go:164] Run: docker container inspect addons-665428 --format={{.State.Status}}
	I0407 12:57:13.341087  875154 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0407 12:57:13.341136  875154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-665428
	I0407 12:57:13.341855  875154 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 12:57:13.341872  875154 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0407 12:57:13.341916  875154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-665428
	I0407 12:57:13.342827  875154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/addons-665428/id_rsa Username:docker}
	I0407 12:57:13.349280  875154 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0407 12:57:13.349661  875154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/addons-665428/id_rsa Username:docker}
	I0407 12:57:13.350782  875154 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0407 12:57:13.350803  875154 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0407 12:57:13.350869  875154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-665428
	I0407 12:57:13.353324  875154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/addons-665428/id_rsa Username:docker}
	I0407 12:57:13.357406  875154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/addons-665428/id_rsa Username:docker}
	I0407 12:57:13.360096  875154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/addons-665428/id_rsa Username:docker}
	I0407 12:57:13.360358  875154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/addons-665428/id_rsa Username:docker}
	I0407 12:57:13.364675  875154 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0407 12:57:13.366034  875154 out.go:177]   - Using image docker.io/busybox:stable
	I0407 12:57:13.367358  875154 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0407 12:57:13.367378  875154 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0407 12:57:13.367432  875154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-665428
	I0407 12:57:13.375461  875154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/addons-665428/id_rsa Username:docker}
	I0407 12:57:13.381789  875154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/addons-665428/id_rsa Username:docker}
	I0407 12:57:13.384242  875154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/addons-665428/id_rsa Username:docker}
	I0407 12:57:13.386675  875154 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0407 12:57:13.386696  875154 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0407 12:57:13.386754  875154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-665428
	I0407 12:57:13.410649  875154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/addons-665428/id_rsa Username:docker}
	I0407 12:57:13.411195  875154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/addons-665428/id_rsa Username:docker}
	I0407 12:57:13.413331  875154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/addons-665428/id_rsa Username:docker}
	I0407 12:57:13.418389  875154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/addons-665428/id_rsa Username:docker}
	I0407 12:57:13.429537  875154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/addons-665428/id_rsa Username:docker}
	W0407 12:57:13.495896  875154 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0407 12:57:13.495964  875154 retry.go:31] will retry after 206.985246ms: ssh: handshake failed: EOF
	I0407 12:57:13.506011  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0407 12:57:13.701148  875154 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 12:57:13.705349  875154 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0407 12:57:13.705382  875154 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0407 12:57:13.809051  875154 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0407 12:57:13.809087  875154 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0407 12:57:13.814899  875154 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0407 12:57:13.817047  875154 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0407 12:57:13.818367  875154 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0407 12:57:13.818442  875154 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0407 12:57:13.896335  875154 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0407 12:57:13.896431  875154 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0407 12:57:13.902433  875154 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0407 12:57:13.907898  875154 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0407 12:57:13.994961  875154 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0407 12:57:13.995350  875154 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0407 12:57:13.995376  875154 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0407 12:57:13.995990  875154 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0407 12:57:13.999734  875154 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0407 12:57:13.999759  875154 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0407 12:57:14.015476  875154 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 12:57:14.093843  875154 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0407 12:57:14.093938  875154 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0407 12:57:14.097616  875154 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0407 12:57:14.097714  875154 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0407 12:57:14.103518  875154 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0407 12:57:14.114057  875154 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0407 12:57:14.114157  875154 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0407 12:57:14.205188  875154 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0407 12:57:14.205275  875154 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0407 12:57:14.214276  875154 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0407 12:57:14.214366  875154 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0407 12:57:14.295725  875154 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0407 12:57:14.295815  875154 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0407 12:57:14.311047  875154 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0407 12:57:14.311080  875154 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0407 12:57:14.613953  875154 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0407 12:57:14.619858  875154 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0407 12:57:14.695701  875154 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0407 12:57:14.695800  875154 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0407 12:57:14.701167  875154 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0407 12:57:14.701249  875154 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0407 12:57:14.708066  875154 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 12:57:14.708168  875154 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0407 12:57:14.712069  875154 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0407 12:57:14.712156  875154 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0407 12:57:14.994834  875154 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 12:57:14.997949  875154 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0407 12:57:14.998046  875154 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0407 12:57:15.094706  875154 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.588639093s)
	I0407 12:57:15.094921  875154 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.393682883s)
	I0407 12:57:15.094975  875154 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0407 12:57:15.097261  875154 node_ready.go:35] waiting up to 6m0s for node "addons-665428" to be "Ready" ...
	I0407 12:57:15.211078  875154 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0407 12:57:15.211175  875154 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0407 12:57:15.212383  875154 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0407 12:57:15.212459  875154 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0407 12:57:15.215761  875154 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0407 12:57:15.215841  875154 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0407 12:57:15.507758  875154 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0407 12:57:15.694124  875154 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0407 12:57:15.701726  875154 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0407 12:57:15.701821  875154 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0407 12:57:15.795440  875154 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-665428" context rescaled to 1 replicas
	I0407 12:57:16.001592  875154 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0407 12:57:16.001699  875154 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0407 12:57:16.494404  875154 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0407 12:57:16.494437  875154 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0407 12:57:16.908254  875154 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0407 12:57:16.908373  875154 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0407 12:57:17.109104  875154 node_ready.go:53] node "addons-665428" has status "Ready":"False"
	I0407 12:57:17.114216  875154 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0407 12:57:19.607868  875154 node_ready.go:53] node "addons-665428" has status "Ready":"False"
	I0407 12:57:19.812525  875154 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.997577269s)
	I0407 12:57:19.812630  875154 addons.go:479] Verifying addon ingress=true in "addons-665428"
	I0407 12:57:19.812767  875154 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.995680893s)
	I0407 12:57:19.812933  875154 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.910412115s)
	I0407 12:57:19.813204  875154 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.905274842s)
	I0407 12:57:19.813349  875154 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.818354226s)
	I0407 12:57:19.813399  875154 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.81738222s)
	I0407 12:57:19.813448  875154 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.797940181s)
	I0407 12:57:19.813596  875154 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.199551391s)
	I0407 12:57:19.813618  875154 addons.go:479] Verifying addon registry=true in "addons-665428"
	I0407 12:57:19.813647  875154 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.193707851s)
	I0407 12:57:19.813870  875154 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.818941759s)
	I0407 12:57:19.813908  875154 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.306052138s)
	I0407 12:57:19.814383  875154 addons.go:479] Verifying addon metrics-server=true in "addons-665428"
	I0407 12:57:19.813963  875154 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.709958073s)
	I0407 12:57:19.815613  875154 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-665428 service yakd-dashboard -n yakd-dashboard
	
	I0407 12:57:19.815717  875154 out.go:177] * Verifying registry addon...
	I0407 12:57:19.815783  875154 out.go:177] * Verifying ingress addon...
	I0407 12:57:19.817907  875154 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0407 12:57:19.819068  875154 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0407 12:57:19.823567  875154 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0407 12:57:19.823599  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0407 12:57:19.823576  875154 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0407 12:57:19.896263  875154 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0407 12:57:19.896296  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:20.298775  875154 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0407 12:57:20.298863  875154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-665428
	I0407 12:57:20.323533  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:20.325240  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:20.325920  875154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/addons-665428/id_rsa Username:docker}
	I0407 12:57:20.718092  875154 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0407 12:57:20.808940  875154 addons.go:238] Setting addon gcp-auth=true in "addons-665428"
	I0407 12:57:20.809028  875154 host.go:66] Checking if "addons-665428" exists ...
	I0407 12:57:20.809626  875154 cli_runner.go:164] Run: docker container inspect addons-665428 --format={{.State.Status}}
	I0407 12:57:20.819532  875154 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.125274487s)
	W0407 12:57:20.819583  875154 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0407 12:57:20.819609  875154 retry.go:31] will retry after 278.965526ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0407 12:57:20.827679  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:20.829336  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:20.835097  875154 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0407 12:57:20.835165  875154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-665428
	I0407 12:57:20.853702  875154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/addons-665428/id_rsa Username:docker}
	I0407 12:57:21.099250  875154 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0407 12:57:21.324845  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:21.325007  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:21.506749  875154 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.392464663s)
	I0407 12:57:21.506793  875154 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-665428"
	I0407 12:57:21.508820  875154 out.go:177] * Verifying csi-hostpath-driver addon...
	I0407 12:57:21.508829  875154 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0407 12:57:21.510433  875154 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0407 12:57:21.511206  875154 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0407 12:57:21.511845  875154 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0407 12:57:21.511869  875154 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0407 12:57:21.526447  875154 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0407 12:57:21.526472  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:21.599389  875154 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0407 12:57:21.599432  875154 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0407 12:57:21.619282  875154 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0407 12:57:21.619306  875154 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0407 12:57:21.638736  875154 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0407 12:57:21.822256  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:21.822256  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:22.018748  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:22.101069  875154 node_ready.go:53] node "addons-665428" has status "Ready":"False"
	I0407 12:57:22.332851  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:22.332913  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:22.514964  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:22.820967  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:22.821519  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:23.014562  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:23.321350  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:23.322192  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:23.515071  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:23.821724  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:23.822357  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:23.912755  875154 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.813451867s)
	I0407 12:57:23.912789  875154 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.27402056s)
	I0407 12:57:23.913909  875154 addons.go:479] Verifying addon gcp-auth=true in "addons-665428"
	I0407 12:57:23.915803  875154 out.go:177] * Verifying gcp-auth addon...
	I0407 12:57:23.917796  875154 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0407 12:57:23.920171  875154 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0407 12:57:23.920187  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:24.015077  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:24.322112  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:24.322193  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:24.421173  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:24.515071  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:24.600850  875154 node_ready.go:53] node "addons-665428" has status "Ready":"False"
	I0407 12:57:24.822029  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:24.822134  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:24.920922  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:25.015044  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:25.321164  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:25.321918  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:25.420619  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:25.514846  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:25.822097  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:25.822189  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:25.920740  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:26.014636  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:26.322252  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:26.322303  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:26.421122  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:26.515255  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:26.601076  875154 node_ready.go:53] node "addons-665428" has status "Ready":"False"
	I0407 12:57:26.821056  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:26.821721  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:26.922362  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:27.023142  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:27.321629  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:27.322533  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:27.421494  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:27.514778  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:27.821907  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:27.821997  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:27.921870  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:28.014922  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:28.322989  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:28.323164  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:28.420956  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:28.514962  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:28.601316  875154 node_ready.go:53] node "addons-665428" has status "Ready":"False"
	I0407 12:57:28.822302  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:28.822349  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:28.920537  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:29.014834  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:29.321395  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:29.322202  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:29.421150  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:29.514746  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:29.821770  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:29.822860  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:29.921804  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:30.015146  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:30.322466  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:30.322586  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:30.421253  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:30.515436  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:30.821633  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:30.822380  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:30.921324  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:31.014112  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:31.100666  875154 node_ready.go:53] node "addons-665428" has status "Ready":"False"
	I0407 12:57:31.322014  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:31.322776  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:31.420913  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:31.514706  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:31.821277  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:31.821959  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:31.920719  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:32.014563  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:32.321926  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:32.322314  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:32.421240  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:32.514434  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:32.823150  875154 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0407 12:57:32.823180  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:32.823277  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:32.921382  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:33.021383  875154 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0407 12:57:33.021469  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:33.101886  875154 node_ready.go:49] node "addons-665428" has status "Ready":"True"
	I0407 12:57:33.101914  875154 node_ready.go:38] duration metric: took 18.00457863s for node "addons-665428" to be "Ready" ...
	I0407 12:57:33.101925  875154 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 12:57:33.111851  875154 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-cwhmr" in "kube-system" namespace to be "Ready" ...
	I0407 12:57:33.323553  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:33.424659  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:33.424913  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:33.524752  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:33.821354  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:33.821833  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:33.921741  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:34.014964  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:34.321071  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:34.321977  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:34.420636  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:34.518725  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:34.822504  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:34.822597  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:34.921453  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:35.014907  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:35.117085  875154 pod_ready.go:103] pod "amd-gpu-device-plugin-cwhmr" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:35.321451  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:35.322151  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:35.495720  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:35.596367  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:35.821139  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:35.821789  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:35.921935  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:36.015177  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:36.322387  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:36.322568  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:36.421604  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:36.514814  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:36.821634  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:36.821690  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:36.921845  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:37.015581  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:37.117986  875154 pod_ready.go:103] pod "amd-gpu-device-plugin-cwhmr" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:37.321078  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:37.321915  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:37.421110  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:37.515873  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:37.821504  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:37.822322  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:37.921447  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:38.014671  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:38.322312  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:38.322473  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:38.421275  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:38.515569  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:38.822175  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:38.822314  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:38.921069  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:39.015911  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:39.118547  875154 pod_ready.go:103] pod "amd-gpu-device-plugin-cwhmr" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:39.322508  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:39.322770  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:39.422898  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:39.524197  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:39.821513  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:39.822162  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:39.921062  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:40.015668  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:40.321505  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:40.322558  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:40.422009  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:40.523287  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:40.822393  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:40.822448  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:40.921289  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:41.015487  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:41.321475  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:41.322186  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:41.421480  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:41.522625  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:41.618082  875154 pod_ready.go:103] pod "amd-gpu-device-plugin-cwhmr" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:41.821199  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:41.821868  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:41.921768  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:42.017864  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:42.322165  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:42.322214  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:42.421789  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:42.524596  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:42.821907  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:42.822903  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:42.921164  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:43.015674  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:43.322543  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:43.323117  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:43.421423  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:43.522453  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:43.821896  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:43.822334  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:43.921280  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:44.015645  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:44.117474  875154 pod_ready.go:103] pod "amd-gpu-device-plugin-cwhmr" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:44.322280  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:44.322294  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:44.424461  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:44.524555  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:44.821895  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:44.821941  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:44.920384  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:45.015713  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:45.321901  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:45.322058  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:45.421357  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:45.514666  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:45.821904  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:45.821935  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:45.921613  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:46.014997  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:46.321578  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:46.322130  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:46.420863  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:46.515075  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:46.616925  875154 pod_ready.go:103] pod "amd-gpu-device-plugin-cwhmr" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:46.820927  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:46.821686  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:46.921381  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:47.015010  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:47.321364  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:47.322342  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:47.420955  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:47.515234  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:47.821820  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:47.822027  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:47.921437  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:48.015088  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:48.322353  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:48.322371  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:48.421188  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:48.515589  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:48.617624  875154 pod_ready.go:103] pod "amd-gpu-device-plugin-cwhmr" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:48.822151  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:48.822799  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:48.921855  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:49.015400  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:49.321844  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:49.321909  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:49.420799  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:49.516289  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:49.822388  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:49.822432  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:49.921641  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:50.014910  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:50.321633  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:50.322294  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:50.421197  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:50.516094  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:50.618786  875154 pod_ready.go:103] pod "amd-gpu-device-plugin-cwhmr" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:50.821182  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:50.821755  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:50.921371  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:51.014657  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:51.322111  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:51.322491  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:51.421597  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:51.515323  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:51.822128  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:51.822552  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:51.922841  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:52.014945  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:52.321559  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:52.322334  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:52.421169  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:52.515464  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:52.822454  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:52.822490  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:52.921594  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:53.015549  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:53.117247  875154 pod_ready.go:103] pod "amd-gpu-device-plugin-cwhmr" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:53.322162  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:53.322594  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:53.421543  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:53.514585  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:53.822484  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:53.822611  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:53.920807  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:54.014757  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:54.321723  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:54.322349  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:54.421183  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:54.515473  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:54.821740  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:54.822114  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:54.920960  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:55.015335  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:55.321906  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:55.322153  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:55.420627  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:55.515090  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:55.617417  875154 pod_ready.go:103] pod "amd-gpu-device-plugin-cwhmr" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:55.822043  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:55.822179  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:55.920758  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:56.014866  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:56.321268  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:56.321848  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:56.422132  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:56.515466  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:56.821997  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:56.822039  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:56.920625  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:57.014944  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:57.321106  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:57.321954  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:57.420967  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:57.515465  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:57.619441  875154 pod_ready.go:103] pod "amd-gpu-device-plugin-cwhmr" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:57.821856  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:57.822943  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:57.920601  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:58.014912  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:58.321714  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:58.322196  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:58.421180  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:58.516159  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:58.821466  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:58.822103  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:58.920859  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:59.021686  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:59.322840  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:59.322871  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:59.423679  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:59.514371  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:59.822119  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:59.822354  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:59.921017  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:00.015196  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:00.117123  875154 pod_ready.go:103] pod "amd-gpu-device-plugin-cwhmr" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:00.321442  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:00.322088  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:00.420830  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:00.515706  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:00.617655  875154 pod_ready.go:93] pod "amd-gpu-device-plugin-cwhmr" in "kube-system" namespace has status "Ready":"True"
	I0407 12:58:00.617683  875154 pod_ready.go:82] duration metric: took 27.505797079s for pod "amd-gpu-device-plugin-cwhmr" in "kube-system" namespace to be "Ready" ...
	I0407 12:58:00.617696  875154 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-8n9kz" in "kube-system" namespace to be "Ready" ...
	I0407 12:58:00.621825  875154 pod_ready.go:93] pod "coredns-668d6bf9bc-8n9kz" in "kube-system" namespace has status "Ready":"True"
	I0407 12:58:00.621848  875154 pod_ready.go:82] duration metric: took 4.144229ms for pod "coredns-668d6bf9bc-8n9kz" in "kube-system" namespace to be "Ready" ...
	I0407 12:58:00.621866  875154 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-665428" in "kube-system" namespace to be "Ready" ...
	I0407 12:58:00.625758  875154 pod_ready.go:93] pod "etcd-addons-665428" in "kube-system" namespace has status "Ready":"True"
	I0407 12:58:00.625783  875154 pod_ready.go:82] duration metric: took 3.91036ms for pod "etcd-addons-665428" in "kube-system" namespace to be "Ready" ...
	I0407 12:58:00.625795  875154 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-665428" in "kube-system" namespace to be "Ready" ...
	I0407 12:58:00.629712  875154 pod_ready.go:93] pod "kube-apiserver-addons-665428" in "kube-system" namespace has status "Ready":"True"
	I0407 12:58:00.629739  875154 pod_ready.go:82] duration metric: took 3.933852ms for pod "kube-apiserver-addons-665428" in "kube-system" namespace to be "Ready" ...
	I0407 12:58:00.629752  875154 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-665428" in "kube-system" namespace to be "Ready" ...
	I0407 12:58:00.633345  875154 pod_ready.go:93] pod "kube-controller-manager-addons-665428" in "kube-system" namespace has status "Ready":"True"
	I0407 12:58:00.633373  875154 pod_ready.go:82] duration metric: took 3.614373ms for pod "kube-controller-manager-addons-665428" in "kube-system" namespace to be "Ready" ...
	I0407 12:58:00.633388  875154 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-72jdn" in "kube-system" namespace to be "Ready" ...
	I0407 12:58:00.820949  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:00.822007  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:00.921031  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:01.015306  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:01.015727  875154 pod_ready.go:93] pod "kube-proxy-72jdn" in "kube-system" namespace has status "Ready":"True"
	I0407 12:58:01.015746  875154 pod_ready.go:82] duration metric: took 382.3505ms for pod "kube-proxy-72jdn" in "kube-system" namespace to be "Ready" ...
	I0407 12:58:01.015755  875154 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-665428" in "kube-system" namespace to be "Ready" ...
	I0407 12:58:01.321892  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:01.322086  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:01.415533  875154 pod_ready.go:93] pod "kube-scheduler-addons-665428" in "kube-system" namespace has status "Ready":"True"
	I0407 12:58:01.415568  875154 pod_ready.go:82] duration metric: took 399.80387ms for pod "kube-scheduler-addons-665428" in "kube-system" namespace to be "Ready" ...
	I0407 12:58:01.415580  875154 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-5hlgg" in "kube-system" namespace to be "Ready" ...
	I0407 12:58:01.421347  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:01.514854  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:01.816347  875154 pod_ready.go:93] pod "metrics-server-7fbb699795-5hlgg" in "kube-system" namespace has status "Ready":"True"
	I0407 12:58:01.816373  875154 pod_ready.go:82] duration metric: took 400.786955ms for pod "metrics-server-7fbb699795-5hlgg" in "kube-system" namespace to be "Ready" ...
	I0407 12:58:01.816386  875154 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-j7hn6" in "kube-system" namespace to be "Ready" ...
	I0407 12:58:01.821613  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:01.822148  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:01.921149  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:02.015662  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:02.321507  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:02.322163  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:02.420324  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:02.514900  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:02.820570  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:02.823038  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:02.922021  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:03.015712  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:03.321548  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:03.321834  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:03.421117  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:03.515947  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:03.821605  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:03.821693  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:03.821896  875154 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j7hn6" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:03.920628  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:04.014739  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:04.320459  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:04.321704  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:04.421163  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:04.515151  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:04.821656  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:04.822083  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:04.921547  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:05.014692  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:05.320574  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:05.322156  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:05.421225  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:05.514067  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:05.821370  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:05.822289  875154 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j7hn6" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:05.822371  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:05.921375  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:06.014992  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:06.320842  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:06.322128  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:06.421134  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:06.515547  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:06.821868  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:06.821878  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:06.920876  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:07.015661  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:07.321644  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:07.322144  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:07.421511  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:07.514962  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:07.821147  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:07.821741  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:07.822330  875154 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j7hn6" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:07.921893  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:08.015674  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:08.320629  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:08.321719  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:08.421345  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:08.514790  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:08.820504  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:08.821529  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:08.921054  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:09.015257  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:09.322106  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:09.322178  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:09.421415  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:09.514841  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:09.821043  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:09.822213  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:09.921667  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:10.014938  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:10.321705  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:10.322416  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:10.323185  875154 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j7hn6" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:10.422130  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:10.522529  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:10.822228  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:10.822248  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:10.921505  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:11.014785  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:11.320742  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:11.321496  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:11.421376  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:11.515498  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:11.821601  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:11.821685  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:11.922413  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:12.023549  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:12.321432  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:12.321592  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:12.420847  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:12.515449  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:12.820969  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:12.821660  875154 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j7hn6" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:12.821838  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:12.922329  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:13.014854  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:13.321595  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:13.321856  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:13.421714  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:13.514879  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:13.820915  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:13.821473  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:13.920628  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:14.014697  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:14.320863  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:14.321636  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:14.421114  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:14.515239  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:14.821358  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:14.822339  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:14.822784  875154 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j7hn6" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:14.921680  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:15.015168  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:15.321123  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:15.322083  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:15.421221  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:15.515428  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:15.821662  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:15.821711  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:15.921417  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:16.014756  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:16.320682  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:16.321517  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:16.420987  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:16.514953  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:16.820885  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:16.821835  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:16.921137  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:17.022178  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:17.323154  875154 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j7hn6" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:17.323428  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:17.323497  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:17.421807  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:17.515477  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:17.821532  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:17.822050  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:17.921612  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:18.015490  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:18.323553  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:18.323567  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:18.421942  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:18.515993  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:18.899437  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:18.900712  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:18.996913  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:19.016269  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:19.401669  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:19.401972  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:19.402423  875154 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j7hn6" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:19.611528  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:19.611841  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:19.823108  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:19.823560  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:19.921410  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:20.014783  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:20.320715  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:20.322343  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:20.420806  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:20.515265  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:20.821254  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:20.821491  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:20.921954  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:21.015322  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:21.323369  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:21.323505  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:21.421392  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:21.514887  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:21.821051  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:21.821503  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:21.822185  875154 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j7hn6" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:21.921913  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:22.015742  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:22.322909  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:22.322981  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:22.422432  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:22.515151  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:22.820774  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:22.822136  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:22.921036  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:23.015471  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:23.321718  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:23.321771  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:23.421637  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:23.515129  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:23.821390  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:23.821926  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:23.822647  875154 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j7hn6" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:23.921549  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:24.014814  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:24.320670  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:24.321546  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:24.420822  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:24.515063  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:24.820941  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:24.821969  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:24.921989  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:25.015384  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:25.321123  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:25.322080  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:25.421735  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:25.515340  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:25.822251  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:25.822255  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:25.921226  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:26.016052  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:26.320923  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:26.322080  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:26.322608  875154 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j7hn6" in "kube-system" namespace has status "Ready":"False"
	I0407 12:58:26.421465  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:26.514587  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:26.821453  875154 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-j7hn6" in "kube-system" namespace has status "Ready":"True"
	I0407 12:58:26.821479  875154 pod_ready.go:82] duration metric: took 25.005083973s for pod "nvidia-device-plugin-daemonset-j7hn6" in "kube-system" namespace to be "Ready" ...
	I0407 12:58:26.821496  875154 pod_ready.go:39] duration metric: took 53.719552845s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 12:58:26.821520  875154 api_server.go:52] waiting for apiserver process to appear ...
	I0407 12:58:26.821565  875154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 12:58:26.821637  875154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 12:58:26.821976  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:26.822021  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:26.858322  875154 cri.go:89] found id: "75130c76c57bbf684ca27a760a6e6e8525879978e665a1f3f58f5dc0234592cb"
	I0407 12:58:26.858345  875154 cri.go:89] found id: ""
	I0407 12:58:26.858353  875154 logs.go:282] 1 containers: [75130c76c57bbf684ca27a760a6e6e8525879978e665a1f3f58f5dc0234592cb]
	I0407 12:58:26.858405  875154 ssh_runner.go:195] Run: which crictl
	I0407 12:58:26.861880  875154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 12:58:26.861946  875154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 12:58:26.896027  875154 cri.go:89] found id: "ded1756b9a37f8081ac6c3921fc27d0db031e420e3ca53bee5f7ba43b7b5da45"
	I0407 12:58:26.896050  875154 cri.go:89] found id: ""
	I0407 12:58:26.896057  875154 logs.go:282] 1 containers: [ded1756b9a37f8081ac6c3921fc27d0db031e420e3ca53bee5f7ba43b7b5da45]
	I0407 12:58:26.896113  875154 ssh_runner.go:195] Run: which crictl
	I0407 12:58:26.899709  875154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 12:58:26.899772  875154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 12:58:26.921526  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:26.935638  875154 cri.go:89] found id: "20a03cdb18d9f853f5ba06b8cdde0a4719148c76c2d7d81234acc0704f9eb21a"
	I0407 12:58:26.935661  875154 cri.go:89] found id: ""
	I0407 12:58:26.935669  875154 logs.go:282] 1 containers: [20a03cdb18d9f853f5ba06b8cdde0a4719148c76c2d7d81234acc0704f9eb21a]
	I0407 12:58:26.935728  875154 ssh_runner.go:195] Run: which crictl
	I0407 12:58:26.939302  875154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 12:58:26.939381  875154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 12:58:26.977072  875154 cri.go:89] found id: "f7f4e5d1c8efa10a24e409de72cc8eb734006214b8b6d1a4bf154af2050f08fa"
	I0407 12:58:26.977097  875154 cri.go:89] found id: ""
	I0407 12:58:26.977105  875154 logs.go:282] 1 containers: [f7f4e5d1c8efa10a24e409de72cc8eb734006214b8b6d1a4bf154af2050f08fa]
	I0407 12:58:26.977151  875154 ssh_runner.go:195] Run: which crictl
	I0407 12:58:26.980569  875154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 12:58:26.980636  875154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 12:58:27.015249  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:27.016816  875154 cri.go:89] found id: "f2b2fe3bc458c61e59406ac2a8f07e879031cb3158521fde0c03fb164db116c1"
	I0407 12:58:27.016836  875154 cri.go:89] found id: ""
	I0407 12:58:27.016843  875154 logs.go:282] 1 containers: [f2b2fe3bc458c61e59406ac2a8f07e879031cb3158521fde0c03fb164db116c1]
	I0407 12:58:27.016888  875154 ssh_runner.go:195] Run: which crictl
	I0407 12:58:27.020423  875154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 12:58:27.020493  875154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 12:58:27.057722  875154 cri.go:89] found id: "abce39cb6102e50074dd43fb84c83720779016b8edd6a8426e2873fe98b75199"
	I0407 12:58:27.057749  875154 cri.go:89] found id: ""
	I0407 12:58:27.057756  875154 logs.go:282] 1 containers: [abce39cb6102e50074dd43fb84c83720779016b8edd6a8426e2873fe98b75199]
	I0407 12:58:27.057811  875154 ssh_runner.go:195] Run: which crictl
	I0407 12:58:27.061344  875154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 12:58:27.061417  875154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 12:58:27.096466  875154 cri.go:89] found id: "435ec6027ff900db416627ae8d8c4bb8e1c3b8e62366ca3dbccfe3c8e946adf9"
	I0407 12:58:27.096488  875154 cri.go:89] found id: ""
	I0407 12:58:27.096495  875154 logs.go:282] 1 containers: [435ec6027ff900db416627ae8d8c4bb8e1c3b8e62366ca3dbccfe3c8e946adf9]
	I0407 12:58:27.096541  875154 ssh_runner.go:195] Run: which crictl
	I0407 12:58:27.100274  875154 logs.go:123] Gathering logs for kubelet ...
	I0407 12:58:27.100302  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0407 12:58:27.156375  875154 logs.go:138] Found kubelet problem: Apr 07 12:57:32 addons-665428 kubelet[1651]: W0407 12:57:32.655690    1651 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-665428" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-665428' and this object
	W0407 12:58:27.156593  875154 logs.go:138] Found kubelet problem: Apr 07 12:57:32 addons-665428 kubelet[1651]: E0407 12:57:32.655741    1651 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-665428\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-665428' and this object" logger="UnhandledError"
	I0407 12:58:27.185621  875154 logs.go:123] Gathering logs for dmesg ...
	I0407 12:58:27.185668  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 12:58:27.213034  875154 logs.go:123] Gathering logs for etcd [ded1756b9a37f8081ac6c3921fc27d0db031e420e3ca53bee5f7ba43b7b5da45] ...
	I0407 12:58:27.213089  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ded1756b9a37f8081ac6c3921fc27d0db031e420e3ca53bee5f7ba43b7b5da45"
	I0407 12:58:27.263009  875154 logs.go:123] Gathering logs for kube-scheduler [f7f4e5d1c8efa10a24e409de72cc8eb734006214b8b6d1a4bf154af2050f08fa] ...
	I0407 12:58:27.263053  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7f4e5d1c8efa10a24e409de72cc8eb734006214b8b6d1a4bf154af2050f08fa"
	I0407 12:58:27.303312  875154 logs.go:123] Gathering logs for kube-proxy [f2b2fe3bc458c61e59406ac2a8f07e879031cb3158521fde0c03fb164db116c1] ...
	I0407 12:58:27.303350  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b2fe3bc458c61e59406ac2a8f07e879031cb3158521fde0c03fb164db116c1"
	I0407 12:58:27.322094  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:27.322229  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:27.339598  875154 logs.go:123] Gathering logs for container status ...
	I0407 12:58:27.339628  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 12:58:27.381868  875154 logs.go:123] Gathering logs for describe nodes ...
	I0407 12:58:27.381901  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0407 12:58:27.421104  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:27.473135  875154 logs.go:123] Gathering logs for kube-apiserver [75130c76c57bbf684ca27a760a6e6e8525879978e665a1f3f58f5dc0234592cb] ...
	I0407 12:58:27.473172  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75130c76c57bbf684ca27a760a6e6e8525879978e665a1f3f58f5dc0234592cb"
	I0407 12:58:27.514512  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:27.520632  875154 logs.go:123] Gathering logs for coredns [20a03cdb18d9f853f5ba06b8cdde0a4719148c76c2d7d81234acc0704f9eb21a] ...
	I0407 12:58:27.520677  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a03cdb18d9f853f5ba06b8cdde0a4719148c76c2d7d81234acc0704f9eb21a"
	I0407 12:58:27.554763  875154 logs.go:123] Gathering logs for kube-controller-manager [abce39cb6102e50074dd43fb84c83720779016b8edd6a8426e2873fe98b75199] ...
	I0407 12:58:27.554796  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abce39cb6102e50074dd43fb84c83720779016b8edd6a8426e2873fe98b75199"
	I0407 12:58:27.612956  875154 logs.go:123] Gathering logs for kindnet [435ec6027ff900db416627ae8d8c4bb8e1c3b8e62366ca3dbccfe3c8e946adf9] ...
	I0407 12:58:27.612998  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 435ec6027ff900db416627ae8d8c4bb8e1c3b8e62366ca3dbccfe3c8e946adf9"
	I0407 12:58:27.650737  875154 logs.go:123] Gathering logs for CRI-O ...
	I0407 12:58:27.650773  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 12:58:27.728005  875154 out.go:358] Setting ErrFile to fd 2...
	I0407 12:58:27.728055  875154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0407 12:58:27.728144  875154 out.go:270] X Problems detected in kubelet:
	W0407 12:58:27.728161  875154 out.go:270]   Apr 07 12:57:32 addons-665428 kubelet[1651]: W0407 12:57:32.655690    1651 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-665428" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-665428' and this object
	W0407 12:58:27.728172  875154 out.go:270]   Apr 07 12:57:32 addons-665428 kubelet[1651]: E0407 12:57:32.655741    1651 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-665428\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-665428' and this object" logger="UnhandledError"
	I0407 12:58:27.728184  875154 out.go:358] Setting ErrFile to fd 2...
	I0407 12:58:27.728189  875154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:58:27.821203  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:27.821895  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:27.920649  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:28.014712  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:28.321897  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:28.321898  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:28.420986  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:28.515420  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:28.823101  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:28.823102  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:28.920904  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:29.015744  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:29.322085  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:29.322228  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:29.420645  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:29.515067  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:29.821457  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:29.821803  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:29.922261  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:30.014605  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:30.321383  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:58:30.322310  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:30.421403  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:30.514617  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:30.821988  875154 kapi.go:107] duration metric: took 1m11.004079826s to wait for kubernetes.io/minikube-addons=registry ...
	I0407 12:58:30.822113  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:30.920668  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:31.014954  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:31.321960  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:31.421081  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:31.516110  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:31.901600  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:31.921127  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:32.016385  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:32.322517  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:32.421489  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:32.514744  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:32.823110  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:32.921177  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:33.015465  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:33.322885  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:33.422146  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:33.514593  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:33.822750  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:33.923063  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:34.023846  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:34.396761  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:34.421585  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:34.514911  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:34.911104  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:34.996339  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:35.015377  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:35.400242  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:35.499462  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:35.596134  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:35.897575  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:35.996601  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:36.015213  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:36.395128  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:36.421219  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:36.515329  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:36.823124  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:36.930168  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:37.017974  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:37.322721  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:37.422397  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:37.513961  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:37.729044  875154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:58:37.744116  875154 api_server.go:72] duration metric: took 1m24.521539709s to wait for apiserver process to appear ...
	I0407 12:58:37.744151  875154 api_server.go:88] waiting for apiserver healthz status ...
	I0407 12:58:37.744188  875154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 12:58:37.744238  875154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 12:58:37.808479  875154 cri.go:89] found id: "75130c76c57bbf684ca27a760a6e6e8525879978e665a1f3f58f5dc0234592cb"
	I0407 12:58:37.808504  875154 cri.go:89] found id: ""
	I0407 12:58:37.808512  875154 logs.go:282] 1 containers: [75130c76c57bbf684ca27a760a6e6e8525879978e665a1f3f58f5dc0234592cb]
	I0407 12:58:37.808569  875154 ssh_runner.go:195] Run: which crictl
	I0407 12:58:37.812427  875154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 12:58:37.812528  875154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 12:58:37.823663  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:37.851530  875154 cri.go:89] found id: "ded1756b9a37f8081ac6c3921fc27d0db031e420e3ca53bee5f7ba43b7b5da45"
	I0407 12:58:37.851554  875154 cri.go:89] found id: ""
	I0407 12:58:37.851562  875154 logs.go:282] 1 containers: [ded1756b9a37f8081ac6c3921fc27d0db031e420e3ca53bee5f7ba43b7b5da45]
	I0407 12:58:37.851609  875154 ssh_runner.go:195] Run: which crictl
	I0407 12:58:37.856487  875154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 12:58:37.856559  875154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 12:58:37.924389  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:37.932716  875154 cri.go:89] found id: "20a03cdb18d9f853f5ba06b8cdde0a4719148c76c2d7d81234acc0704f9eb21a"
	I0407 12:58:37.932741  875154 cri.go:89] found id: ""
	I0407 12:58:37.932749  875154 logs.go:282] 1 containers: [20a03cdb18d9f853f5ba06b8cdde0a4719148c76c2d7d81234acc0704f9eb21a]
	I0407 12:58:37.932813  875154 ssh_runner.go:195] Run: which crictl
	I0407 12:58:37.936354  875154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 12:58:37.936422  875154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 12:58:38.011472  875154 cri.go:89] found id: "f7f4e5d1c8efa10a24e409de72cc8eb734006214b8b6d1a4bf154af2050f08fa"
	I0407 12:58:38.011496  875154 cri.go:89] found id: ""
	I0407 12:58:38.011506  875154 logs.go:282] 1 containers: [f7f4e5d1c8efa10a24e409de72cc8eb734006214b8b6d1a4bf154af2050f08fa]
	I0407 12:58:38.011562  875154 ssh_runner.go:195] Run: which crictl
	I0407 12:58:38.014483  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:38.015989  875154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 12:58:38.016070  875154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 12:58:38.054370  875154 cri.go:89] found id: "f2b2fe3bc458c61e59406ac2a8f07e879031cb3158521fde0c03fb164db116c1"
	I0407 12:58:38.054394  875154 cri.go:89] found id: ""
	I0407 12:58:38.054402  875154 logs.go:282] 1 containers: [f2b2fe3bc458c61e59406ac2a8f07e879031cb3158521fde0c03fb164db116c1]
	I0407 12:58:38.054449  875154 ssh_runner.go:195] Run: which crictl
	I0407 12:58:38.094550  875154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 12:58:38.094632  875154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 12:58:38.132969  875154 cri.go:89] found id: "abce39cb6102e50074dd43fb84c83720779016b8edd6a8426e2873fe98b75199"
	I0407 12:58:38.132998  875154 cri.go:89] found id: ""
	I0407 12:58:38.133008  875154 logs.go:282] 1 containers: [abce39cb6102e50074dd43fb84c83720779016b8edd6a8426e2873fe98b75199]
	I0407 12:58:38.133069  875154 ssh_runner.go:195] Run: which crictl
	I0407 12:58:38.137143  875154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 12:58:38.137213  875154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 12:58:38.212011  875154 cri.go:89] found id: "435ec6027ff900db416627ae8d8c4bb8e1c3b8e62366ca3dbccfe3c8e946adf9"
	I0407 12:58:38.212041  875154 cri.go:89] found id: ""
	I0407 12:58:38.212052  875154 logs.go:282] 1 containers: [435ec6027ff900db416627ae8d8c4bb8e1c3b8e62366ca3dbccfe3c8e946adf9]
	I0407 12:58:38.212120  875154 ssh_runner.go:195] Run: which crictl
	I0407 12:58:38.216557  875154 logs.go:123] Gathering logs for kubelet ...
	I0407 12:58:38.216590  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0407 12:58:38.278988  875154 logs.go:138] Found kubelet problem: Apr 07 12:57:32 addons-665428 kubelet[1651]: W0407 12:57:32.655690    1651 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-665428" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-665428' and this object
	W0407 12:58:38.279207  875154 logs.go:138] Found kubelet problem: Apr 07 12:57:32 addons-665428 kubelet[1651]: E0407 12:57:32.655741    1651 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-665428\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-665428' and this object" logger="UnhandledError"
	I0407 12:58:38.315214  875154 logs.go:123] Gathering logs for dmesg ...
	I0407 12:58:38.315263  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 12:58:38.323718  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:38.345750  875154 logs.go:123] Gathering logs for etcd [ded1756b9a37f8081ac6c3921fc27d0db031e420e3ca53bee5f7ba43b7b5da45] ...
	I0407 12:58:38.345795  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ded1756b9a37f8081ac6c3921fc27d0db031e420e3ca53bee5f7ba43b7b5da45"
	I0407 12:58:38.421873  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:38.433739  875154 logs.go:123] Gathering logs for kube-scheduler [f7f4e5d1c8efa10a24e409de72cc8eb734006214b8b6d1a4bf154af2050f08fa] ...
	I0407 12:58:38.433790  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7f4e5d1c8efa10a24e409de72cc8eb734006214b8b6d1a4bf154af2050f08fa"
	I0407 12:58:38.476643  875154 logs.go:123] Gathering logs for kube-proxy [f2b2fe3bc458c61e59406ac2a8f07e879031cb3158521fde0c03fb164db116c1] ...
	I0407 12:58:38.476684  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b2fe3bc458c61e59406ac2a8f07e879031cb3158521fde0c03fb164db116c1"
	I0407 12:58:38.515538  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:38.533836  875154 logs.go:123] Gathering logs for kindnet [435ec6027ff900db416627ae8d8c4bb8e1c3b8e62366ca3dbccfe3c8e946adf9] ...
	I0407 12:58:38.533867  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 435ec6027ff900db416627ae8d8c4bb8e1c3b8e62366ca3dbccfe3c8e946adf9"
	I0407 12:58:38.610472  875154 logs.go:123] Gathering logs for CRI-O ...
	I0407 12:58:38.610519  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 12:58:38.690606  875154 logs.go:123] Gathering logs for container status ...
	I0407 12:58:38.690664  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 12:58:38.744073  875154 logs.go:123] Gathering logs for describe nodes ...
	I0407 12:58:38.744128  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0407 12:58:38.823092  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:38.917411  875154 logs.go:123] Gathering logs for kube-apiserver [75130c76c57bbf684ca27a760a6e6e8525879978e665a1f3f58f5dc0234592cb] ...
	I0407 12:58:38.917468  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75130c76c57bbf684ca27a760a6e6e8525879978e665a1f3f58f5dc0234592cb"
	I0407 12:58:38.921045  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:39.015277  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:39.017159  875154 logs.go:123] Gathering logs for coredns [20a03cdb18d9f853f5ba06b8cdde0a4719148c76c2d7d81234acc0704f9eb21a] ...
	I0407 12:58:39.017189  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a03cdb18d9f853f5ba06b8cdde0a4719148c76c2d7d81234acc0704f9eb21a"
	I0407 12:58:39.057020  875154 logs.go:123] Gathering logs for kube-controller-manager [abce39cb6102e50074dd43fb84c83720779016b8edd6a8426e2873fe98b75199] ...
	I0407 12:58:39.057058  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abce39cb6102e50074dd43fb84c83720779016b8edd6a8426e2873fe98b75199"
	I0407 12:58:39.227760  875154 out.go:358] Setting ErrFile to fd 2...
	I0407 12:58:39.227800  875154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0407 12:58:39.227877  875154 out.go:270] X Problems detected in kubelet:
	W0407 12:58:39.227895  875154 out.go:270]   Apr 07 12:57:32 addons-665428 kubelet[1651]: W0407 12:57:32.655690    1651 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-665428" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-665428' and this object
	W0407 12:58:39.227904  875154 out.go:270]   Apr 07 12:57:32 addons-665428 kubelet[1651]: E0407 12:57:32.655741    1651 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-665428\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-665428' and this object" logger="UnhandledError"
	I0407 12:58:39.227915  875154 out.go:358] Setting ErrFile to fd 2...
	I0407 12:58:39.227920  875154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:58:39.322498  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:39.421664  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:39.515459  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:39.822669  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:39.921966  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:40.022911  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:40.323001  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:40.421152  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:40.515943  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:40.822966  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:40.921928  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:41.014772  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:41.323722  875154 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:58:41.422053  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:41.515812  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:41.890903  875154 kapi.go:107] duration metric: took 1m22.071834645s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0407 12:58:41.921289  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:42.014656  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:42.422245  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:42.515877  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:42.920769  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:43.016110  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:43.422161  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:43.515462  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:43.921839  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:44.023169  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:44.421192  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:44.515280  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:44.921577  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:45.015028  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:45.421475  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:45.515192  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:45.921688  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:46.022999  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:46.421822  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:58:46.514811  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:46.921424  875154 kapi.go:107] duration metric: took 1m23.003622402s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0407 12:58:46.923723  875154 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-665428 cluster.
	I0407 12:58:46.925569  875154 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0407 12:58:46.927085  875154 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0407 12:58:47.022071  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:47.515410  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:48.015259  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:48.514781  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:49.015208  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:49.229142  875154 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0407 12:58:49.233477  875154 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0407 12:58:49.234606  875154 api_server.go:141] control plane version: v1.32.2
	I0407 12:58:49.234634  875154 api_server.go:131] duration metric: took 11.490476115s to wait for apiserver health ...
	I0407 12:58:49.234647  875154 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 12:58:49.234681  875154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 12:58:49.234743  875154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 12:58:49.274025  875154 cri.go:89] found id: "75130c76c57bbf684ca27a760a6e6e8525879978e665a1f3f58f5dc0234592cb"
	I0407 12:58:49.274057  875154 cri.go:89] found id: ""
	I0407 12:58:49.274067  875154 logs.go:282] 1 containers: [75130c76c57bbf684ca27a760a6e6e8525879978e665a1f3f58f5dc0234592cb]
	I0407 12:58:49.274118  875154 ssh_runner.go:195] Run: which crictl
	I0407 12:58:49.278060  875154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 12:58:49.278128  875154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 12:58:49.326545  875154 cri.go:89] found id: "ded1756b9a37f8081ac6c3921fc27d0db031e420e3ca53bee5f7ba43b7b5da45"
	I0407 12:58:49.326568  875154 cri.go:89] found id: ""
	I0407 12:58:49.326579  875154 logs.go:282] 1 containers: [ded1756b9a37f8081ac6c3921fc27d0db031e420e3ca53bee5f7ba43b7b5da45]
	I0407 12:58:49.326636  875154 ssh_runner.go:195] Run: which crictl
	I0407 12:58:49.330459  875154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 12:58:49.330535  875154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 12:58:49.366864  875154 cri.go:89] found id: "20a03cdb18d9f853f5ba06b8cdde0a4719148c76c2d7d81234acc0704f9eb21a"
	I0407 12:58:49.366893  875154 cri.go:89] found id: ""
	I0407 12:58:49.366904  875154 logs.go:282] 1 containers: [20a03cdb18d9f853f5ba06b8cdde0a4719148c76c2d7d81234acc0704f9eb21a]
	I0407 12:58:49.366968  875154 ssh_runner.go:195] Run: which crictl
	I0407 12:58:49.370710  875154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 12:58:49.370780  875154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 12:58:49.405665  875154 cri.go:89] found id: "f7f4e5d1c8efa10a24e409de72cc8eb734006214b8b6d1a4bf154af2050f08fa"
	I0407 12:58:49.405703  875154 cri.go:89] found id: ""
	I0407 12:58:49.405713  875154 logs.go:282] 1 containers: [f7f4e5d1c8efa10a24e409de72cc8eb734006214b8b6d1a4bf154af2050f08fa]
	I0407 12:58:49.405759  875154 ssh_runner.go:195] Run: which crictl
	I0407 12:58:49.409359  875154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 12:58:49.409444  875154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 12:58:49.445911  875154 cri.go:89] found id: "f2b2fe3bc458c61e59406ac2a8f07e879031cb3158521fde0c03fb164db116c1"
	I0407 12:58:49.445939  875154 cri.go:89] found id: ""
	I0407 12:58:49.445953  875154 logs.go:282] 1 containers: [f2b2fe3bc458c61e59406ac2a8f07e879031cb3158521fde0c03fb164db116c1]
	I0407 12:58:49.446000  875154 ssh_runner.go:195] Run: which crictl
	I0407 12:58:49.449686  875154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 12:58:49.449757  875154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 12:58:49.484763  875154 cri.go:89] found id: "abce39cb6102e50074dd43fb84c83720779016b8edd6a8426e2873fe98b75199"
	I0407 12:58:49.484789  875154 cri.go:89] found id: ""
	I0407 12:58:49.484800  875154 logs.go:282] 1 containers: [abce39cb6102e50074dd43fb84c83720779016b8edd6a8426e2873fe98b75199]
	I0407 12:58:49.484861  875154 ssh_runner.go:195] Run: which crictl
	I0407 12:58:49.488521  875154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 12:58:49.488594  875154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 12:58:49.515590  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:49.523685  875154 cri.go:89] found id: "435ec6027ff900db416627ae8d8c4bb8e1c3b8e62366ca3dbccfe3c8e946adf9"
	I0407 12:58:49.523711  875154 cri.go:89] found id: ""
	I0407 12:58:49.523719  875154 logs.go:282] 1 containers: [435ec6027ff900db416627ae8d8c4bb8e1c3b8e62366ca3dbccfe3c8e946adf9]
	I0407 12:58:49.523776  875154 ssh_runner.go:195] Run: which crictl
	I0407 12:58:49.527919  875154 logs.go:123] Gathering logs for kubelet ...
	I0407 12:58:49.527948  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0407 12:58:49.581450  875154 logs.go:138] Found kubelet problem: Apr 07 12:57:32 addons-665428 kubelet[1651]: W0407 12:57:32.655690    1651 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-665428" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-665428' and this object
	W0407 12:58:49.581631  875154 logs.go:138] Found kubelet problem: Apr 07 12:57:32 addons-665428 kubelet[1651]: E0407 12:57:32.655741    1651 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-665428\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-665428' and this object" logger="UnhandledError"
	I0407 12:58:49.613817  875154 logs.go:123] Gathering logs for dmesg ...
	I0407 12:58:49.613862  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 12:58:49.641697  875154 logs.go:123] Gathering logs for describe nodes ...
	I0407 12:58:49.641736  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0407 12:58:49.752824  875154 logs.go:123] Gathering logs for kube-proxy [f2b2fe3bc458c61e59406ac2a8f07e879031cb3158521fde0c03fb164db116c1] ...
	I0407 12:58:49.752875  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b2fe3bc458c61e59406ac2a8f07e879031cb3158521fde0c03fb164db116c1"
	I0407 12:58:49.829111  875154 logs.go:123] Gathering logs for kube-controller-manager [abce39cb6102e50074dd43fb84c83720779016b8edd6a8426e2873fe98b75199] ...
	I0407 12:58:49.829156  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abce39cb6102e50074dd43fb84c83720779016b8edd6a8426e2873fe98b75199"
	I0407 12:58:49.944307  875154 logs.go:123] Gathering logs for kindnet [435ec6027ff900db416627ae8d8c4bb8e1c3b8e62366ca3dbccfe3c8e946adf9] ...
	I0407 12:58:49.944362  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 435ec6027ff900db416627ae8d8c4bb8e1c3b8e62366ca3dbccfe3c8e946adf9"
	I0407 12:58:50.010921  875154 logs.go:123] Gathering logs for CRI-O ...
	I0407 12:58:50.010959  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 12:58:50.015093  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:50.098463  875154 logs.go:123] Gathering logs for kube-apiserver [75130c76c57bbf684ca27a760a6e6e8525879978e665a1f3f58f5dc0234592cb] ...
	I0407 12:58:50.098507  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75130c76c57bbf684ca27a760a6e6e8525879978e665a1f3f58f5dc0234592cb"
	I0407 12:58:50.155528  875154 logs.go:123] Gathering logs for etcd [ded1756b9a37f8081ac6c3921fc27d0db031e420e3ca53bee5f7ba43b7b5da45] ...
	I0407 12:58:50.155580  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ded1756b9a37f8081ac6c3921fc27d0db031e420e3ca53bee5f7ba43b7b5da45"
	I0407 12:58:50.248510  875154 logs.go:123] Gathering logs for coredns [20a03cdb18d9f853f5ba06b8cdde0a4719148c76c2d7d81234acc0704f9eb21a] ...
	I0407 12:58:50.248549  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a03cdb18d9f853f5ba06b8cdde0a4719148c76c2d7d81234acc0704f9eb21a"
	I0407 12:58:50.327820  875154 logs.go:123] Gathering logs for kube-scheduler [f7f4e5d1c8efa10a24e409de72cc8eb734006214b8b6d1a4bf154af2050f08fa] ...
	I0407 12:58:50.327852  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7f4e5d1c8efa10a24e409de72cc8eb734006214b8b6d1a4bf154af2050f08fa"
	I0407 12:58:50.438426  875154 logs.go:123] Gathering logs for container status ...
	I0407 12:58:50.438469  875154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 12:58:50.516373  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:50.548302  875154 out.go:358] Setting ErrFile to fd 2...
	I0407 12:58:50.548345  875154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0407 12:58:50.548416  875154 out.go:270] X Problems detected in kubelet:
	W0407 12:58:50.548432  875154 out.go:270]   Apr 07 12:57:32 addons-665428 kubelet[1651]: W0407 12:57:32.655690    1651 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-665428" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-665428' and this object
	W0407 12:58:50.548445  875154 out.go:270]   Apr 07 12:57:32 addons-665428 kubelet[1651]: E0407 12:57:32.655741    1651 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-665428\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-665428' and this object" logger="UnhandledError"
	I0407 12:58:50.548455  875154 out.go:358] Setting ErrFile to fd 2...
	I0407 12:58:50.548461  875154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:58:51.015518  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:51.515330  875154 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:58:52.015557  875154 kapi.go:107] duration metric: took 1m30.504347632s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0407 12:58:52.022538  875154 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, cloud-spanner, amd-gpu-device-plugin, storage-provisioner, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0407 12:58:52.025282  875154 addons.go:514] duration metric: took 1m38.802671831s for enable addons: enabled=[ingress-dns nvidia-device-plugin cloud-spanner amd-gpu-device-plugin storage-provisioner metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0407 12:59:00.553923  875154 system_pods.go:59] 19 kube-system pods found
	I0407 12:59:00.553988  875154 system_pods.go:61] "amd-gpu-device-plugin-cwhmr" [5c9f65b0-5a0d-4939-8fa2-1b708daa4020] Running
	I0407 12:59:00.553996  875154 system_pods.go:61] "coredns-668d6bf9bc-8n9kz" [66cccc8d-e6a3-459a-b0e1-b8832c7c4aa5] Running
	I0407 12:59:00.554000  875154 system_pods.go:61] "csi-hostpath-attacher-0" [094c2122-1c89-48b7-b193-ad4ad477acfd] Running
	I0407 12:59:00.554004  875154 system_pods.go:61] "csi-hostpath-resizer-0" [2ffc6db2-8045-4998-ab22-aac6fc498f01] Running
	I0407 12:59:00.554008  875154 system_pods.go:61] "csi-hostpathplugin-h7x8v" [b88a0fe1-9fdf-40d1-b0de-343f8272a383] Running
	I0407 12:59:00.554011  875154 system_pods.go:61] "etcd-addons-665428" [81d3bacd-d630-452a-ba2f-649cd4b0e9b9] Running
	I0407 12:59:00.554014  875154 system_pods.go:61] "kindnet-8cf6m" [32a9b79b-a661-4b97-a2b8-108e68fa6e7d] Running
	I0407 12:59:00.554018  875154 system_pods.go:61] "kube-apiserver-addons-665428" [ec021aae-c93e-4d6c-8f04-1cf0b95ac23f] Running
	I0407 12:59:00.554021  875154 system_pods.go:61] "kube-controller-manager-addons-665428" [7dc65bbd-9126-45c8-95cc-ea79eed5cf03] Running
	I0407 12:59:00.554024  875154 system_pods.go:61] "kube-ingress-dns-minikube" [fba4be19-eaf8-4eae-9734-becfe70c6f1b] Running
	I0407 12:59:00.554028  875154 system_pods.go:61] "kube-proxy-72jdn" [df1bbd6b-7e26-4912-a8c4-3adba5f5190f] Running
	I0407 12:59:00.554031  875154 system_pods.go:61] "kube-scheduler-addons-665428" [82f0742b-64b2-4ee3-b343-c903d63af3cb] Running
	I0407 12:59:00.554038  875154 system_pods.go:61] "metrics-server-7fbb699795-5hlgg" [f2b2888f-f9f3-4735-b2bb-53236b5f80ac] Running
	I0407 12:59:00.554043  875154 system_pods.go:61] "nvidia-device-plugin-daemonset-j7hn6" [ed429b16-b5ab-41ee-b109-c010fae4423b] Running
	I0407 12:59:00.554046  875154 system_pods.go:61] "registry-6c88467877-fmmn4" [a93a9c6b-2be3-41ec-8038-5a2cc8c9b88e] Running
	I0407 12:59:00.554049  875154 system_pods.go:61] "registry-proxy-xfhzf" [e37d8690-bccf-493b-9eb0-c72781088971] Running
	I0407 12:59:00.554052  875154 system_pods.go:61] "snapshot-controller-68b874b76f-fkhtz" [902b5b05-2d4a-4987-90ff-a4e9c54f9089] Running
	I0407 12:59:00.554054  875154 system_pods.go:61] "snapshot-controller-68b874b76f-l6bpx" [a8a93f9c-7ebb-43ff-8922-57125efaff4a] Running
	I0407 12:59:00.554057  875154 system_pods.go:61] "storage-provisioner" [053fa2d5-d547-488c-8e6c-60c953fde9b5] Running
	I0407 12:59:00.554064  875154 system_pods.go:74] duration metric: took 11.319410046s to wait for pod list to return data ...
	I0407 12:59:00.554073  875154 default_sa.go:34] waiting for default service account to be created ...
	I0407 12:59:00.556408  875154 default_sa.go:45] found service account: "default"
	I0407 12:59:00.556443  875154 default_sa.go:55] duration metric: took 2.363374ms for default service account to be created ...
	I0407 12:59:00.556457  875154 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 12:59:00.560111  875154 system_pods.go:86] 19 kube-system pods found
	I0407 12:59:00.560147  875154 system_pods.go:89] "amd-gpu-device-plugin-cwhmr" [5c9f65b0-5a0d-4939-8fa2-1b708daa4020] Running
	I0407 12:59:00.560153  875154 system_pods.go:89] "coredns-668d6bf9bc-8n9kz" [66cccc8d-e6a3-459a-b0e1-b8832c7c4aa5] Running
	I0407 12:59:00.560157  875154 system_pods.go:89] "csi-hostpath-attacher-0" [094c2122-1c89-48b7-b193-ad4ad477acfd] Running
	I0407 12:59:00.560160  875154 system_pods.go:89] "csi-hostpath-resizer-0" [2ffc6db2-8045-4998-ab22-aac6fc498f01] Running
	I0407 12:59:00.560163  875154 system_pods.go:89] "csi-hostpathplugin-h7x8v" [b88a0fe1-9fdf-40d1-b0de-343f8272a383] Running
	I0407 12:59:00.560166  875154 system_pods.go:89] "etcd-addons-665428" [81d3bacd-d630-452a-ba2f-649cd4b0e9b9] Running
	I0407 12:59:00.560169  875154 system_pods.go:89] "kindnet-8cf6m" [32a9b79b-a661-4b97-a2b8-108e68fa6e7d] Running
	I0407 12:59:00.560172  875154 system_pods.go:89] "kube-apiserver-addons-665428" [ec021aae-c93e-4d6c-8f04-1cf0b95ac23f] Running
	I0407 12:59:00.560175  875154 system_pods.go:89] "kube-controller-manager-addons-665428" [7dc65bbd-9126-45c8-95cc-ea79eed5cf03] Running
	I0407 12:59:00.560178  875154 system_pods.go:89] "kube-ingress-dns-minikube" [fba4be19-eaf8-4eae-9734-becfe70c6f1b] Running
	I0407 12:59:00.560180  875154 system_pods.go:89] "kube-proxy-72jdn" [df1bbd6b-7e26-4912-a8c4-3adba5f5190f] Running
	I0407 12:59:00.560183  875154 system_pods.go:89] "kube-scheduler-addons-665428" [82f0742b-64b2-4ee3-b343-c903d63af3cb] Running
	I0407 12:59:00.560186  875154 system_pods.go:89] "metrics-server-7fbb699795-5hlgg" [f2b2888f-f9f3-4735-b2bb-53236b5f80ac] Running
	I0407 12:59:00.560191  875154 system_pods.go:89] "nvidia-device-plugin-daemonset-j7hn6" [ed429b16-b5ab-41ee-b109-c010fae4423b] Running
	I0407 12:59:00.560194  875154 system_pods.go:89] "registry-6c88467877-fmmn4" [a93a9c6b-2be3-41ec-8038-5a2cc8c9b88e] Running
	I0407 12:59:00.560197  875154 system_pods.go:89] "registry-proxy-xfhzf" [e37d8690-bccf-493b-9eb0-c72781088971] Running
	I0407 12:59:00.560204  875154 system_pods.go:89] "snapshot-controller-68b874b76f-fkhtz" [902b5b05-2d4a-4987-90ff-a4e9c54f9089] Running
	I0407 12:59:00.560208  875154 system_pods.go:89] "snapshot-controller-68b874b76f-l6bpx" [a8a93f9c-7ebb-43ff-8922-57125efaff4a] Running
	I0407 12:59:00.560211  875154 system_pods.go:89] "storage-provisioner" [053fa2d5-d547-488c-8e6c-60c953fde9b5] Running
	I0407 12:59:00.560220  875154 system_pods.go:126] duration metric: took 3.756125ms to wait for k8s-apps to be running ...
	I0407 12:59:00.560237  875154 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 12:59:00.560288  875154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 12:59:00.573132  875154 system_svc.go:56] duration metric: took 12.882912ms WaitForService to wait for kubelet
	I0407 12:59:00.573162  875154 kubeadm.go:582] duration metric: took 1m47.350592278s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 12:59:00.573190  875154 node_conditions.go:102] verifying NodePressure condition ...
	I0407 12:59:00.576289  875154 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0407 12:59:00.576322  875154 node_conditions.go:123] node cpu capacity is 8
	I0407 12:59:00.576340  875154 node_conditions.go:105] duration metric: took 3.144897ms to run NodePressure ...
	I0407 12:59:00.576356  875154 start.go:241] waiting for startup goroutines ...
	I0407 12:59:00.576362  875154 start.go:246] waiting for cluster config update ...
	I0407 12:59:00.576377  875154 start.go:255] writing updated cluster config ...
	I0407 12:59:00.576722  875154 ssh_runner.go:195] Run: rm -f paused
	I0407 12:59:00.630829  875154 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0407 12:59:00.633000  875154 out.go:177] * Done! kubectl is now configured to use "addons-665428" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 07 13:01:08 addons-665428 crio[1046]: time="2025-04-07 13:01:08.318188408Z" level=info msg="Removing pod sandbox: d2aef8841844cf9d52b00bb7128cb54d9883d6ddead37809c57c4e4b2135da99" id=28440282-5a21-49d0-98fa-a01ed5b07ae1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Apr 07 13:01:08 addons-665428 crio[1046]: time="2025-04-07 13:01:08.324456299Z" level=info msg="Removed pod sandbox: d2aef8841844cf9d52b00bb7128cb54d9883d6ddead37809c57c4e4b2135da99" id=28440282-5a21-49d0-98fa-a01ed5b07ae1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Apr 07 13:01:08 addons-665428 crio[1046]: time="2025-04-07 13:01:08.324971344Z" level=info msg="Stopping pod sandbox: 817415620f6b25e524b50610aa553f7dd0cce8f1f4996525a2143bcbb98df5b9" id=0c20bc53-9186-450c-a3dd-f4288390f5a7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 07 13:01:08 addons-665428 crio[1046]: time="2025-04-07 13:01:08.325021747Z" level=info msg="Stopped pod sandbox (already stopped): 817415620f6b25e524b50610aa553f7dd0cce8f1f4996525a2143bcbb98df5b9" id=0c20bc53-9186-450c-a3dd-f4288390f5a7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 07 13:01:08 addons-665428 crio[1046]: time="2025-04-07 13:01:08.325581129Z" level=info msg="Removing pod sandbox: 817415620f6b25e524b50610aa553f7dd0cce8f1f4996525a2143bcbb98df5b9" id=6480b7ef-4176-4589-9ac6-c5962f11f6e3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Apr 07 13:01:08 addons-665428 crio[1046]: time="2025-04-07 13:01:08.332178031Z" level=info msg="Removed pod sandbox: 817415620f6b25e524b50610aa553f7dd0cce8f1f4996525a2143bcbb98df5b9" id=6480b7ef-4176-4589-9ac6-c5962f11f6e3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Apr 07 13:01:08 addons-665428 crio[1046]: time="2025-04-07 13:01:08.332809307Z" level=info msg="Stopping pod sandbox: 9b1a04c73d8d94c098fb5baaebf820284b47f8649276dffb1e9a2879c2e4054e" id=ed3efd9c-8134-4852-b44c-bafbceb06b7a name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 07 13:01:08 addons-665428 crio[1046]: time="2025-04-07 13:01:08.332847245Z" level=info msg="Stopped pod sandbox (already stopped): 9b1a04c73d8d94c098fb5baaebf820284b47f8649276dffb1e9a2879c2e4054e" id=ed3efd9c-8134-4852-b44c-bafbceb06b7a name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 07 13:01:08 addons-665428 crio[1046]: time="2025-04-07 13:01:08.333213859Z" level=info msg="Removing pod sandbox: 9b1a04c73d8d94c098fb5baaebf820284b47f8649276dffb1e9a2879c2e4054e" id=a72187df-2f99-4a31-8bd8-4d543a73a999 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Apr 07 13:01:08 addons-665428 crio[1046]: time="2025-04-07 13:01:08.339693587Z" level=info msg="Removed pod sandbox: 9b1a04c73d8d94c098fb5baaebf820284b47f8649276dffb1e9a2879c2e4054e" id=a72187df-2f99-4a31-8bd8-4d543a73a999 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Apr 07 13:01:08 addons-665428 crio[1046]: time="2025-04-07 13:01:08.340165206Z" level=info msg="Stopping pod sandbox: b95fde8fec1ede6b9bea2b47fe8a8ea67183bee387ac2e1882c9da0567ff2cbe" id=61f20fe1-2a0c-41a5-81d8-5cca3b6538f1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 07 13:01:08 addons-665428 crio[1046]: time="2025-04-07 13:01:08.340196295Z" level=info msg="Stopped pod sandbox (already stopped): b95fde8fec1ede6b9bea2b47fe8a8ea67183bee387ac2e1882c9da0567ff2cbe" id=61f20fe1-2a0c-41a5-81d8-5cca3b6538f1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 07 13:01:08 addons-665428 crio[1046]: time="2025-04-07 13:01:08.340466941Z" level=info msg="Removing pod sandbox: b95fde8fec1ede6b9bea2b47fe8a8ea67183bee387ac2e1882c9da0567ff2cbe" id=c82117ee-27fc-444d-acad-4e2adc8dba30 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Apr 07 13:01:08 addons-665428 crio[1046]: time="2025-04-07 13:01:08.347131534Z" level=info msg="Removed pod sandbox: b95fde8fec1ede6b9bea2b47fe8a8ea67183bee387ac2e1882c9da0567ff2cbe" id=c82117ee-27fc-444d-acad-4e2adc8dba30 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Apr 07 13:02:07 addons-665428 crio[1046]: time="2025-04-07 13:02:07.259084818Z" level=info msg="Running pod sandbox: default/hello-world-app-7d9564db4-vtvmb/POD" id=50548090-1cb7-46d0-988b-ec945499200c name=/runtime.v1.RuntimeService/RunPodSandbox
	Apr 07 13:02:07 addons-665428 crio[1046]: time="2025-04-07 13:02:07.259173357Z" level=warning msg="Allowed annotations are specified for workload []"
	Apr 07 13:02:07 addons-665428 crio[1046]: time="2025-04-07 13:02:07.309145912Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-vtvmb Namespace:default ID:f5e0473215ee5e8b8a6e7d96ef9032a4476c045522192c0535fd13ccbc96cf12 UID:00418ba0-e5a0-4915-b5b1-e4f961e29815 NetNS:/var/run/netns/5ea354e1-6059-4d30-945f-1fa4ab5a141c Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Apr 07 13:02:07 addons-665428 crio[1046]: time="2025-04-07 13:02:07.309201896Z" level=info msg="Adding pod default_hello-world-app-7d9564db4-vtvmb to CNI network \"kindnet\" (type=ptp)"
	Apr 07 13:02:07 addons-665428 crio[1046]: time="2025-04-07 13:02:07.320572798Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-vtvmb Namespace:default ID:f5e0473215ee5e8b8a6e7d96ef9032a4476c045522192c0535fd13ccbc96cf12 UID:00418ba0-e5a0-4915-b5b1-e4f961e29815 NetNS:/var/run/netns/5ea354e1-6059-4d30-945f-1fa4ab5a141c Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Apr 07 13:02:07 addons-665428 crio[1046]: time="2025-04-07 13:02:07.320789806Z" level=info msg="Checking pod default_hello-world-app-7d9564db4-vtvmb for CNI network kindnet (type=ptp)"
	Apr 07 13:02:07 addons-665428 crio[1046]: time="2025-04-07 13:02:07.324498557Z" level=info msg="Ran pod sandbox f5e0473215ee5e8b8a6e7d96ef9032a4476c045522192c0535fd13ccbc96cf12 with infra container: default/hello-world-app-7d9564db4-vtvmb/POD" id=50548090-1cb7-46d0-988b-ec945499200c name=/runtime.v1.RuntimeService/RunPodSandbox
	Apr 07 13:02:07 addons-665428 crio[1046]: time="2025-04-07 13:02:07.326006804Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=90390f78-0eaf-4ba7-be61-20b55dcbd134 name=/runtime.v1.ImageService/ImageStatus
	Apr 07 13:02:07 addons-665428 crio[1046]: time="2025-04-07 13:02:07.326303069Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=90390f78-0eaf-4ba7-be61-20b55dcbd134 name=/runtime.v1.ImageService/ImageStatus
	Apr 07 13:02:07 addons-665428 crio[1046]: time="2025-04-07 13:02:07.326944422Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=283665fd-60de-4e78-883a-433ef2781010 name=/runtime.v1.ImageService/PullImage
	Apr 07 13:02:07 addons-665428 crio[1046]: time="2025-04-07 13:02:07.337732649Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d1cebcb0b2cf1       docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591                              2 minutes ago       Running             nginx                     0                   717ee5c4de85f       nginx
	9a9c07ca53ce6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   16292839121d8       busybox
	7217fa320f3f3       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   9ea8bb7c61c0a       ingress-nginx-controller-56d7c84fd4-zw28t
	52eef68237b75       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             3 minutes ago       Running             minikube-ingress-dns      0                   241030117e54a       kube-ingress-dns-minikube
	28fadf1c8f449       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   dd8aaff99f346       local-path-provisioner-76f89f99b5-cptvz
	011a5077ae7c7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              patch                     0                   41c1376617440       ingress-nginx-admission-patch-m6g7q
	92615f3022e14       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              create                    0                   9ab4c06cc61b5       ingress-nginx-admission-create-6rnpp
	20a03cdb18d9f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   46c0b7f99697b       coredns-668d6bf9bc-8n9kz
	dfdaa07a8c5ee       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   abe6cd88722d9       storage-provisioner
	435ec6027ff90       docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495                           4 minutes ago       Running             kindnet-cni               0                   9f3856bab500c       kindnet-8cf6m
	f2b2fe3bc458c       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                             4 minutes ago       Running             kube-proxy                0                   27b9e8797930b       kube-proxy-72jdn
	ded1756b9a37f       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             5 minutes ago       Running             etcd                      0                   68418f6fa2c19       etcd-addons-665428
	f7f4e5d1c8efa       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                             5 minutes ago       Running             kube-scheduler            0                   3f9806c4a2717       kube-scheduler-addons-665428
	abce39cb6102e       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                             5 minutes ago       Running             kube-controller-manager   0                   5bf9d61f938e0       kube-controller-manager-addons-665428
	75130c76c57bb       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef                                                             5 minutes ago       Running             kube-apiserver            0                   1087999fc4607       kube-apiserver-addons-665428
	
	
	==> coredns [20a03cdb18d9f853f5ba06b8cdde0a4719148c76c2d7d81234acc0704f9eb21a] <==
	[INFO] 10.244.0.17:56044 - 41106 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000189184s
	[INFO] 10.244.0.17:51024 - 29597 "A IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.00450313s
	[INFO] 10.244.0.17:51024 - 30016 "AAAA IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.005302052s
	[INFO] 10.244.0.17:57806 - 12993 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.00463382s
	[INFO] 10.244.0.17:57806 - 12619 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004674705s
	[INFO] 10.244.0.17:35096 - 30649 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004389248s
	[INFO] 10.244.0.17:35096 - 30931 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.00471511s
	[INFO] 10.244.0.17:52802 - 32279 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000145487s
	[INFO] 10.244.0.17:52802 - 32744 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000188173s
	[INFO] 10.244.0.22:52505 - 53865 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000228777s
	[INFO] 10.244.0.22:56009 - 20354 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00034965s
	[INFO] 10.244.0.22:34441 - 15533 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000110754s
	[INFO] 10.244.0.22:57257 - 45043 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129801s
	[INFO] 10.244.0.22:59135 - 31246 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128976s
	[INFO] 10.244.0.22:39578 - 29193 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00018869s
	[INFO] 10.244.0.22:33955 - 26223 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007172869s
	[INFO] 10.244.0.22:42017 - 64598 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.014022093s
	[INFO] 10.244.0.22:45772 - 23969 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006842525s
	[INFO] 10.244.0.22:53933 - 14173 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.016856704s
	[INFO] 10.244.0.22:39364 - 29167 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00616241s
	[INFO] 10.244.0.22:39823 - 38683 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006834515s
	[INFO] 10.244.0.22:56263 - 36091 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.002197588s
	[INFO] 10.244.0.22:35399 - 40505 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002222207s
	[INFO] 10.244.0.25:44378 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000275957s
	[INFO] 10.244.0.25:55302 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000217662s
	
	
	==> describe nodes <==
	Name:               addons-665428
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-665428
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=addons-665428
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_07T12_57_09_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-665428
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 12:57:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-665428
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 13:02:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 13:00:12 +0000   Mon, 07 Apr 2025 12:57:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 13:00:12 +0000   Mon, 07 Apr 2025 12:57:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 13:00:12 +0000   Mon, 07 Apr 2025 12:57:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 13:00:12 +0000   Mon, 07 Apr 2025 12:57:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-665428
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 a16c2e9ca5ff4a98a8ce1676fa60cf7f
	  System UUID:                f2d1d95d-998c-47b7-a8ed-5ee65ea97d1a
	  Boot ID:                    4b653518-b950-459a-a31f-13ba8513bd40
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     hello-world-app-7d9564db4-vtvmb              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-zw28t    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m49s
	  kube-system                 coredns-668d6bf9bc-8n9kz                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m55s
	  kube-system                 etcd-addons-665428                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m
	  kube-system                 kindnet-8cf6m                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m56s
	  kube-system                 kube-apiserver-addons-665428                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-controller-manager-addons-665428        200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-proxy-72jdn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-scheduler-addons-665428                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  local-path-storage          local-path-provisioner-76f89f99b5-cptvz      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 4m54s                kube-proxy       
	  Normal   Starting                 5m6s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m6s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m6s (x8 over 5m6s)  kubelet          Node addons-665428 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m6s (x8 over 5m6s)  kubelet          Node addons-665428 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m6s (x8 over 5m6s)  kubelet          Node addons-665428 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m                   kubelet          Node addons-665428 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m                   kubelet          Node addons-665428 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m                   kubelet          Node addons-665428 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m57s                node-controller  Node addons-665428 event: Registered Node addons-665428 in Controller
	  Normal   NodeReady                4m36s                kubelet          Node addons-665428 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 91 98 11 22 7f 08 06
	[  +7.680955] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1e 6a 1f 16 9f 59 08 06
	[Apr 7 12:50] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 8f 1a c5 4f f3 08 06
	[  +7.784295] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be 84 3c ca f2 b2 08 06
	[Apr 7 12:53] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 f9 81 cf 00 39 08 06
	[ +26.962697] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 27 f9 d1 67 5d 08 06
	[Apr 7 12:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ea 09 bd c3 8a 3e 8e c9 fa c2 f3 08 08 00
	[  +1.023023] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: ea 09 bd c3 8a 3e 8e c9 fa c2 f3 08 08 00
	[  +2.015857] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: ea 09 bd c3 8a 3e 8e c9 fa c2 f3 08 08 00
	[Apr 7 13:00] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ea 09 bd c3 8a 3e 8e c9 fa c2 f3 08 08 00
	[  +8.191526] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ea 09 bd c3 8a 3e 8e c9 fa c2 f3 08 08 00
	[ +16.127045] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ea 09 bd c3 8a 3e 8e c9 fa c2 f3 08 08 00
	[ +33.278024] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ea 09 bd c3 8a 3e 8e c9 fa c2 f3 08 08 00
	
	
	==> etcd [ded1756b9a37f8081ac6c3921fc27d0db031e420e3ca53bee5f7ba43b7b5da45] <==
	{"level":"info","ts":"2025-04-07T12:57:17.397476Z","caller":"traceutil/trace.go:171","msg":"trace[1644884311] transaction","detail":"{read_only:false; response_revision:459; number_of_response:1; }","duration":"101.212467ms","start":"2025-04-07T12:57:17.296239Z","end":"2025-04-07T12:57:17.397452Z","steps":["trace[1644884311] 'process raft request'  (duration: 16.90674ms)","trace[1644884311] 'compare'  (duration: 83.442181ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-07T12:57:17.398043Z","caller":"traceutil/trace.go:171","msg":"trace[24851056] transaction","detail":"{read_only:false; response_revision:460; number_of_response:1; }","duration":"101.352534ms","start":"2025-04-07T12:57:17.296673Z","end":"2025-04-07T12:57:17.398026Z","steps":["trace[24851056] 'process raft request'  (duration: 100.020839ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:57:17.398136Z","caller":"traceutil/trace.go:171","msg":"trace[1284467814] transaction","detail":"{read_only:false; response_revision:461; number_of_response:1; }","duration":"100.643044ms","start":"2025-04-07T12:57:17.297485Z","end":"2025-04-07T12:57:17.398128Z","steps":["trace[1284467814] 'process raft request'  (duration: 99.253823ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:57:17.398158Z","caller":"traceutil/trace.go:171","msg":"trace[2011814620] transaction","detail":"{read_only:false; response_revision:462; number_of_response:1; }","duration":"100.453637ms","start":"2025-04-07T12:57:17.297697Z","end":"2025-04-07T12:57:17.398151Z","steps":["trace[2011814620] 'process raft request'  (duration: 99.067931ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:57:17.398247Z","caller":"traceutil/trace.go:171","msg":"trace[732749747] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"100.434317ms","start":"2025-04-07T12:57:17.297807Z","end":"2025-04-07T12:57:17.398241Z","steps":["trace[732749747] 'process raft request'  (duration: 99.004433ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:57:17.398265Z","caller":"traceutil/trace.go:171","msg":"trace[850305508] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"100.299894ms","start":"2025-04-07T12:57:17.297961Z","end":"2025-04-07T12:57:17.398261Z","steps":["trace[850305508] 'process raft request'  (duration: 98.886227ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:57:17.398285Z","caller":"traceutil/trace.go:171","msg":"trace[1571580194] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"100.546164ms","start":"2025-04-07T12:57:17.297734Z","end":"2025-04-07T12:57:17.398280Z","steps":["trace[1571580194] 'process raft request'  (duration: 99.147869ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:57:17.398395Z","caller":"traceutil/trace.go:171","msg":"trace[159697820] linearizableReadLoop","detail":"{readStateIndex:476; appliedIndex:469; }","duration":"100.357828ms","start":"2025-04-07T12:57:17.298029Z","end":"2025-04-07T12:57:17.398387Z","steps":["trace[159697820] 'read index received'  (duration: 8.273735ms)","trace[159697820] 'applied index is now lower than readState.Index'  (duration: 92.083012ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-07T12:57:17.398462Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.413871ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" limit:1 ","response":"range_response_count:1 size:474"}
	{"level":"info","ts":"2025-04-07T12:57:17.405941Z","caller":"traceutil/trace.go:171","msg":"trace[1855144250] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:470; }","duration":"107.928017ms","start":"2025-04-07T12:57:17.297998Z","end":"2025-04-07T12:57:17.405926Z","steps":["trace[1855144250] 'agreement among raft nodes before linearized reading'  (duration: 100.408906ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T12:57:17.405731Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.597722ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" limit:1 ","response":"range_response_count:1 size:3351"}
	{"level":"info","ts":"2025-04-07T12:57:17.406462Z","caller":"traceutil/trace.go:171","msg":"trace[1266208728] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:1; response_revision:470; }","duration":"105.354314ms","start":"2025-04-07T12:57:17.301083Z","end":"2025-04-07T12:57:17.406437Z","steps":["trace[1266208728] 'agreement among raft nodes before linearized reading'  (duration: 104.565339ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T12:57:17.405784Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.861058ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-07T12:57:17.406696Z","caller":"traceutil/trace.go:171","msg":"trace[841399092] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:470; }","duration":"103.752955ms","start":"2025-04-07T12:57:17.302900Z","end":"2025-04-07T12:57:17.406653Z","steps":["trace[841399092] 'agreement among raft nodes before linearized reading'  (duration: 102.866025ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T12:57:17.405815Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.379709ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-04-07T12:57:17.406991Z","caller":"traceutil/trace.go:171","msg":"trace[608274704] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:470; }","duration":"105.563786ms","start":"2025-04-07T12:57:17.301415Z","end":"2025-04-07T12:57:17.406979Z","steps":["trace[608274704] 'agreement among raft nodes before linearized reading'  (duration: 104.379366ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T12:57:17.405852Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.419217ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-04-07T12:57:17.405880Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.599914ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" limit:1 ","response":"range_response_count:1 size:5431"}
	{"level":"info","ts":"2025-04-07T12:57:17.407453Z","caller":"traceutil/trace.go:171","msg":"trace[1357946588] range","detail":"{range_begin:/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account; range_end:; response_count:0; response_revision:470; }","duration":"106.027487ms","start":"2025-04-07T12:57:17.301410Z","end":"2025-04-07T12:57:17.407438Z","steps":["trace[1357946588] 'agreement among raft nodes before linearized reading'  (duration: 104.423017ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:57:17.496336Z","caller":"traceutil/trace.go:171","msg":"trace[876646798] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:470; }","duration":"106.149076ms","start":"2025-04-07T12:57:17.301262Z","end":"2025-04-07T12:57:17.407411Z","steps":["trace[876646798] 'agreement among raft nodes before linearized reading'  (duration: 104.598428ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T12:57:17.607986Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.922291ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/local-path-storage\" limit:1 ","response":"range_response_count:1 size:621"}
	{"level":"info","ts":"2025-04-07T12:57:17.609342Z","caller":"traceutil/trace.go:171","msg":"trace[319783848] range","detail":"{range_begin:/registry/namespaces/local-path-storage; range_end:; response_count:1; response_revision:472; }","duration":"108.295741ms","start":"2025-04-07T12:57:17.500993Z","end":"2025-04-07T12:57:17.609289Z","steps":["trace[319783848] 'get authentication metadata'  (duration: 105.015629ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T13:00:27.506041Z","caller":"traceutil/trace.go:171","msg":"trace[1327077382] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1883; }","duration":"108.996085ms","start":"2025-04-07T13:00:27.397015Z","end":"2025-04-07T13:00:27.506011Z","steps":["trace[1327077382] 'process raft request'  (duration: 47.868948ms)","trace[1327077382] 'compare'  (duration: 60.888288ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-07T13:00:27.506088Z","caller":"traceutil/trace.go:171","msg":"trace[1029335348] transaction","detail":"{read_only:false; response_revision:1884; number_of_response:1; }","duration":"108.980457ms","start":"2025-04-07T13:00:27.397083Z","end":"2025-04-07T13:00:27.506064Z","steps":["trace[1029335348] 'process raft request'  (duration: 108.789628ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T13:00:27.697817Z","caller":"traceutil/trace.go:171","msg":"trace[75807417] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1889; }","duration":"110.440552ms","start":"2025-04-07T13:00:27.587349Z","end":"2025-04-07T13:00:27.697790Z","steps":["trace[75807417] 'process raft request'  (duration: 62.2441ms)","trace[75807417] 'compare'  (duration: 48.048812ms)"],"step_count":2}
	
	
	==> kernel <==
	 13:02:08 up  4:44,  0 users,  load average: 0.33, 1.68, 2.39
	Linux addons-665428 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [435ec6027ff900db416627ae8d8c4bb8e1c3b8e62366ca3dbccfe3c8e946adf9] <==
	I0407 13:00:02.121467       1 main.go:301] handling current node
	I0407 13:00:12.118595       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0407 13:00:12.118637       1 main.go:301] handling current node
	I0407 13:00:22.118261       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0407 13:00:22.118313       1 main.go:301] handling current node
	I0407 13:00:32.119183       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0407 13:00:32.119228       1 main.go:301] handling current node
	I0407 13:00:42.125416       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0407 13:00:42.125455       1 main.go:301] handling current node
	I0407 13:00:52.125520       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0407 13:00:52.125558       1 main.go:301] handling current node
	I0407 13:01:02.121432       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0407 13:01:02.121473       1 main.go:301] handling current node
	I0407 13:01:12.125389       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0407 13:01:12.125427       1 main.go:301] handling current node
	I0407 13:01:22.118257       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0407 13:01:22.118303       1 main.go:301] handling current node
	I0407 13:01:32.125858       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0407 13:01:32.125899       1 main.go:301] handling current node
	I0407 13:01:42.125435       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0407 13:01:42.125475       1 main.go:301] handling current node
	I0407 13:01:52.119125       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0407 13:01:52.119166       1 main.go:301] handling current node
	I0407 13:02:02.125420       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0407 13:02:02.125463       1 main.go:301] handling current node
	
	
	==> kube-apiserver [75130c76c57bbf684ca27a760a6e6e8525879978e665a1f3f58f5dc0234592cb] <==
	 > logger="UnhandledError"
	I0407 12:57:49.448748       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0407 12:59:10.340548       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42832: use of closed network connection
	E0407 12:59:10.524326       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42854: use of closed network connection
	I0407 12:59:31.449514       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.53.163"}
	I0407 12:59:43.998448       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0407 12:59:44.183499       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.194.243"}
	I0407 12:59:47.911052       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0407 12:59:49.103216       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0407 12:59:50.499226       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0407 13:00:01.517980       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0407 13:00:26.032520       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0407 13:00:26.032582       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0407 13:00:26.047768       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0407 13:00:26.047943       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0407 13:00:26.049519       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0407 13:00:26.049562       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0407 13:00:26.065816       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0407 13:00:26.065873       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0407 13:00:26.110445       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0407 13:00:26.110491       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0407 13:00:27.093739       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0407 13:00:27.112227       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0407 13:00:27.208436       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0407 13:02:07.119470       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.6.149"}
	
	
	==> kube-controller-manager [abce39cb6102e50074dd43fb84c83720779016b8edd6a8426e2873fe98b75199] <==
	E0407 13:01:05.747274       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0407 13:01:09.198419       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 13:01:09.199455       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0407 13:01:09.200342       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 13:01:09.200397       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0407 13:01:36.547423       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 13:01:36.548547       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0407 13:01:36.549452       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 13:01:36.549489       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0407 13:01:45.701974       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 13:01:45.703040       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0407 13:01:45.704013       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 13:01:45.704056       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0407 13:01:47.307955       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 13:01:47.308930       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0407 13:01:47.309856       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 13:01:47.309894       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0407 13:01:48.781808       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 13:01:48.782841       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0407 13:01:48.783721       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 13:01:48.783760       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0407 13:02:06.957135       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="16.586709ms"
	I0407 13:02:06.962964       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="5.685464ms"
	I0407 13:02:06.963056       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="45.599µs"
	I0407 13:02:06.965889       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="53.088µs"
	
	
	==> kube-proxy [f2b2fe3bc458c61e59406ac2a8f07e879031cb3158521fde0c03fb164db116c1] <==
	I0407 12:57:13.199205       1 server_linux.go:66] "Using iptables proxy"
	I0407 12:57:13.512219       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0407 12:57:13.512407       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0407 12:57:13.898939       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0407 12:57:13.899011       1 server_linux.go:170] "Using iptables Proxier"
	I0407 12:57:13.915837       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0407 12:57:13.993928       1 server.go:497] "Version info" version="v1.32.2"
	I0407 12:57:13.994513       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 12:57:14.015630       1 config.go:199] "Starting service config controller"
	I0407 12:57:14.015686       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0407 12:57:14.015727       1 config.go:105] "Starting endpoint slice config controller"
	I0407 12:57:14.015732       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0407 12:57:14.016376       1 config.go:329] "Starting node config controller"
	I0407 12:57:14.016397       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0407 12:57:14.119030       1 shared_informer.go:320] Caches are synced for node config
	I0407 12:57:14.119065       1 shared_informer.go:320] Caches are synced for service config
	I0407 12:57:14.119077       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f7f4e5d1c8efa10a24e409de72cc8eb734006214b8b6d1a4bf154af2050f08fa] <==
	W0407 12:57:05.825082       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0407 12:57:05.825082       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0407 12:57:05.825103       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0407 12:57:05.825107       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:57:05.825276       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0407 12:57:05.825283       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0407 12:57:05.825327       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0407 12:57:05.825329       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:57:05.825464       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0407 12:57:05.825494       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:57:05.825785       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0407 12:57:05.825800       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0407 12:57:05.825798       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0407 12:57:05.825821       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0407 12:57:05.825825       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0407 12:57:05.825829       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0407 12:57:05.825871       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0407 12:57:05.825899       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0407 12:57:05.825980       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0407 12:57:05.826005       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:57:05.826013       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0407 12:57:05.825981       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0407 12:57:05.826044       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0407 12:57:05.826044       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0407 12:57:07.322914       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 07 13:02:06 addons-665428 kubelet[1651]: I0407 13:02:06.956680    1651 memory_manager.go:355] "RemoveStaleState removing state" podUID="b88a0fe1-9fdf-40d1-b0de-343f8272a383" containerName="csi-snapshotter"
	Apr 07 13:02:06 addons-665428 kubelet[1651]: I0407 13:02:06.956688    1651 memory_manager.go:355] "RemoveStaleState removing state" podUID="094c2122-1c89-48b7-b193-ad4ad477acfd" containerName="csi-attacher"
	Apr 07 13:02:06 addons-665428 kubelet[1651]: I0407 13:02:06.956696    1651 memory_manager.go:355] "RemoveStaleState removing state" podUID="66a88fb9-7375-46e1-90bc-9b40921270b2" containerName="task-pv-container"
	Apr 07 13:02:06 addons-665428 kubelet[1651]: I0407 13:02:06.956706    1651 memory_manager.go:355] "RemoveStaleState removing state" podUID="b88a0fe1-9fdf-40d1-b0de-343f8272a383" containerName="csi-provisioner"
	Apr 07 13:02:06 addons-665428 kubelet[1651]: I0407 13:02:06.956716    1651 memory_manager.go:355] "RemoveStaleState removing state" podUID="b88a0fe1-9fdf-40d1-b0de-343f8272a383" containerName="node-driver-registrar"
	Apr 07 13:02:06 addons-665428 kubelet[1651]: I0407 13:02:06.956724    1651 memory_manager.go:355] "RemoveStaleState removing state" podUID="a8a93f9c-7ebb-43ff-8922-57125efaff4a" containerName="volume-snapshot-controller"
	Apr 07 13:02:06 addons-665428 kubelet[1651]: I0407 13:02:06.956731    1651 memory_manager.go:355] "RemoveStaleState removing state" podUID="b88a0fe1-9fdf-40d1-b0de-343f8272a383" containerName="csi-external-health-monitor-controller"
	Apr 07 13:02:06 addons-665428 kubelet[1651]: I0407 13:02:06.956739    1651 memory_manager.go:355] "RemoveStaleState removing state" podUID="2ffc6db2-8045-4998-ab22-aac6fc498f01" containerName="csi-resizer"
	Apr 07 13:02:06 addons-665428 kubelet[1651]: I0407 13:02:06.975126    1651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffwg9\" (UniqueName: \"kubernetes.io/projected/00418ba0-e5a0-4915-b5b1-e4f961e29815-kube-api-access-ffwg9\") pod \"hello-world-app-7d9564db4-vtvmb\" (UID: \"00418ba0-e5a0-4915-b5b1-e4f961e29815\") " pod="default/hello-world-app-7d9564db4-vtvmb"
	Apr 07 13:02:07 addons-665428 kubelet[1651]: W0407 13:02:07.323166    1651 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/06c16868ee72c95149a526467d370aa659c5268bcb174a0228f3f554f9e08c02/crio-f5e0473215ee5e8b8a6e7d96ef9032a4476c045522192c0535fd13ccbc96cf12 WatchSource:0}: Error finding container f5e0473215ee5e8b8a6e7d96ef9032a4476c045522192c0535fd13ccbc96cf12: Status 404 returned error can't find the container with id f5e0473215ee5e8b8a6e7d96ef9032a4476c045522192c0535fd13ccbc96cf12
	Apr 07 13:02:08 addons-665428 kubelet[1651]: E0407 13:02:08.133520    1651 container_manager_linux.go:516] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/06c16868ee72c95149a526467d370aa659c5268bcb174a0228f3f554f9e08c02, memory: /docker/06c16868ee72c95149a526467d370aa659c5268bcb174a0228f3f554f9e08c02/system.slice/kubelet.service"
	Apr 07 13:02:08 addons-665428 kubelet[1651]: E0407 13:02:08.143354    1651 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/de7e64e01fb951eb827ea308315391a7df1de6e508bc8b98af3d3f0dedd8377e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/de7e64e01fb951eb827ea308315391a7df1de6e508bc8b98af3d3f0dedd8377e/diff: no such file or directory, extraDiskErr: <nil>
	Apr 07 13:02:08 addons-665428 kubelet[1651]: E0407 13:02:08.143411    1651 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/de7e64e01fb951eb827ea308315391a7df1de6e508bc8b98af3d3f0dedd8377e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/de7e64e01fb951eb827ea308315391a7df1de6e508bc8b98af3d3f0dedd8377e/diff: no such file or directory, extraDiskErr: <nil>
	Apr 07 13:02:08 addons-665428 kubelet[1651]: E0407 13:02:08.151753    1651 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d2044391d2ca72e0306f1cefb9a6defd21e8546dae61e4c5695a05232e07f27e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d2044391d2ca72e0306f1cefb9a6defd21e8546dae61e4c5695a05232e07f27e/diff: no such file or directory, extraDiskErr: <nil>
	Apr 07 13:02:08 addons-665428 kubelet[1651]: E0407 13:02:08.161799    1651 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/41c39da6c850580bdfc0fe78543c6ce548d2cd0cbf0797b3a3fda9e1c3699419/diff" to get inode usage: stat /var/lib/containers/storage/overlay/41c39da6c850580bdfc0fe78543c6ce548d2cd0cbf0797b3a3fda9e1c3699419/diff: no such file or directory, extraDiskErr: <nil>
	Apr 07 13:02:08 addons-665428 kubelet[1651]: E0407 13:02:08.163083    1651 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/bc496f589732d142007ed82193fcc2fa19028d0f2534d2f3cbcef25384788a05/diff" to get inode usage: stat /var/lib/containers/storage/overlay/bc496f589732d142007ed82193fcc2fa19028d0f2534d2f3cbcef25384788a05/diff: no such file or directory, extraDiskErr: <nil>
	Apr 07 13:02:08 addons-665428 kubelet[1651]: E0407 13:02:08.213486    1651 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/41c39da6c850580bdfc0fe78543c6ce548d2cd0cbf0797b3a3fda9e1c3699419/diff" to get inode usage: stat /var/lib/containers/storage/overlay/41c39da6c850580bdfc0fe78543c6ce548d2cd0cbf0797b3a3fda9e1c3699419/diff: no such file or directory, extraDiskErr: <nil>
	Apr 07 13:02:08 addons-665428 kubelet[1651]: E0407 13:02:08.215686    1651 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/38c5c73dbd7d4cc0a73fe119d22f62454ace3e4e687bf864ec9f2a2366b0f64e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/38c5c73dbd7d4cc0a73fe119d22f62454ace3e4e687bf864ec9f2a2366b0f64e/diff: no such file or directory, extraDiskErr: <nil>
	Apr 07 13:02:08 addons-665428 kubelet[1651]: E0407 13:02:08.217923    1651 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a9fc0e96f445ab71a73ddf7d702649538b97465ee7e5d8c983e30788560c5271/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a9fc0e96f445ab71a73ddf7d702649538b97465ee7e5d8c983e30788560c5271/diff: no such file or directory, extraDiskErr: <nil>
	Apr 07 13:02:08 addons-665428 kubelet[1651]: E0407 13:02:08.221203    1651 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/38c5c73dbd7d4cc0a73fe119d22f62454ace3e4e687bf864ec9f2a2366b0f64e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/38c5c73dbd7d4cc0a73fe119d22f62454ace3e4e687bf864ec9f2a2366b0f64e/diff: no such file or directory, extraDiskErr: <nil>
	Apr 07 13:02:08 addons-665428 kubelet[1651]: E0407 13:02:08.223425    1651 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a9fc0e96f445ab71a73ddf7d702649538b97465ee7e5d8c983e30788560c5271/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a9fc0e96f445ab71a73ddf7d702649538b97465ee7e5d8c983e30788560c5271/diff: no such file or directory, extraDiskErr: <nil>
	Apr 07 13:02:08 addons-665428 kubelet[1651]: E0407 13:02:08.225669    1651 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/bc496f589732d142007ed82193fcc2fa19028d0f2534d2f3cbcef25384788a05/diff" to get inode usage: stat /var/lib/containers/storage/overlay/bc496f589732d142007ed82193fcc2fa19028d0f2534d2f3cbcef25384788a05/diff: no such file or directory, extraDiskErr: <nil>
	Apr 07 13:02:08 addons-665428 kubelet[1651]: E0407 13:02:08.226779    1651 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d2044391d2ca72e0306f1cefb9a6defd21e8546dae61e4c5695a05232e07f27e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d2044391d2ca72e0306f1cefb9a6defd21e8546dae61e4c5695a05232e07f27e/diff: no such file or directory, extraDiskErr: <nil>
	Apr 07 13:02:08 addons-665428 kubelet[1651]: E0407 13:02:08.261206    1651 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744030928260990166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:617400,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 13:02:08 addons-665428 kubelet[1651]: E0407 13:02:08.261240    1651 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744030928260990166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:617400,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [dfdaa07a8c5eeaa1f83f0e799310bba309a6c48ddd3db3b436707860055dd417] <==
	I0407 12:57:33.334959       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0407 12:57:33.344254       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0407 12:57:33.344310       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0407 12:57:33.351997       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0407 12:57:33.352188       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-665428_2270833b-0119-4261-888a-d1d9aa4dd29b!
	I0407 12:57:33.352145       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9f022fe4-956d-48c4-8394-7a9212b7efa1", APIVersion:"v1", ResourceVersion:"913", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-665428_2270833b-0119-4261-888a-d1d9aa4dd29b became leader
	I0407 12:57:33.452384       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-665428_2270833b-0119-4261-888a-d1d9aa4dd29b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-665428 -n addons-665428
helpers_test.go:261: (dbg) Run:  kubectl --context addons-665428 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-vtvmb ingress-nginx-admission-create-6rnpp ingress-nginx-admission-patch-m6g7q
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-665428 describe pod hello-world-app-7d9564db4-vtvmb ingress-nginx-admission-create-6rnpp ingress-nginx-admission-patch-m6g7q
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-665428 describe pod hello-world-app-7d9564db4-vtvmb ingress-nginx-admission-create-6rnpp ingress-nginx-admission-patch-m6g7q: exit status 1 (76.950819ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-vtvmb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-665428/192.168.49.2
	Start Time:       Mon, 07 Apr 2025 13:02:06 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ffwg9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ffwg9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-vtvmb to addons-665428
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6rnpp" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-m6g7q" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-665428 describe pod hello-world-app-7d9564db4-vtvmb ingress-nginx-admission-create-6rnpp ingress-nginx-admission-patch-m6g7q: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-665428 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-665428 addons disable ingress-dns --alsologtostderr -v=1: (1.02934624s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-665428 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-665428 addons disable ingress --alsologtostderr -v=1: (7.767300077s)
--- FAIL: TestAddons/parallel/Ingress (154.66s)
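For context on the failure above: the nginx pod itself became Ready within 12s, but the subsequent `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'` through the node timed out after 2m10s, which points at the ingress-nginx controller not serving the Host-based rule rather than at the backend pod. A minimal v1 Ingress of the kind the test's `testdata/nginx-ingress-v1.yaml` applies might look like the sketch below — this is a hypothetical reconstruction, since the fixture's actual contents do not appear in this log:

```yaml
# Hypothetical sketch, not the actual testdata/nginx-ingress-v1.yaml fixture:
# routes Host "nginx.example.com" to an assumed "nginx" Service on port 80.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
    - host: nginx.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
```

With such a rule admitted by the controller, the test's curl with the matching `Host:` header should return the nginx response from inside the node; the 2m10s non-zero exit above means that request never completed.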

                                                
                                    

Test pass (303/331)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 15.35
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.2/json-events 13.37
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.07
18 TestDownloadOnly/v1.32.2/DeleteAll 0.23
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 1.15
21 TestBinaryMirror 0.84
22 TestOffline 60.03
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 149.99
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 9.49
35 TestAddons/parallel/Registry 16.69
37 TestAddons/parallel/InspektorGadget 12.25
38 TestAddons/parallel/MetricsServer 5.75
40 TestAddons/parallel/CSI 62
41 TestAddons/parallel/Headlamp 19.6
42 TestAddons/parallel/CloudSpanner 5.53
43 TestAddons/parallel/LocalPath 18.22
44 TestAddons/parallel/NvidiaDevicePlugin 6.67
45 TestAddons/parallel/Yakd 11.76
46 TestAddons/parallel/AmdGpuDevicePlugin 6.68
47 TestAddons/StoppedEnableDisable 12.2
48 TestCertOptions 26.62
49 TestCertExpiration 234.92
51 TestForceSystemdFlag 29.07
52 TestForceSystemdEnv 29.11
54 TestKVMDriverInstallOrUpdate 4.84
58 TestErrorSpam/setup 21.54
59 TestErrorSpam/start 0.61
60 TestErrorSpam/status 0.89
61 TestErrorSpam/pause 1.63
62 TestErrorSpam/unpause 1.65
63 TestErrorSpam/stop 1.48
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 44.97
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 31.59
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.06
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.34
75 TestFunctional/serial/CacheCmd/cache/add_local 2.18
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.73
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 33.3
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.44
86 TestFunctional/serial/LogsFileCmd 1.48
87 TestFunctional/serial/InvalidService 4.23
89 TestFunctional/parallel/ConfigCmd 0.38
90 TestFunctional/parallel/DashboardCmd 12.96
91 TestFunctional/parallel/DryRun 0.37
92 TestFunctional/parallel/InternationalLanguage 0.17
93 TestFunctional/parallel/StatusCmd 1.03
97 TestFunctional/parallel/ServiceCmdConnect 14.57
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 48.35
101 TestFunctional/parallel/SSHCmd 0.59
102 TestFunctional/parallel/CpCmd 1.78
103 TestFunctional/parallel/MySQL 31.59
104 TestFunctional/parallel/FileSync 0.26
105 TestFunctional/parallel/CertSync 1.67
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.6
113 TestFunctional/parallel/License 0.65
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
118 TestFunctional/parallel/ImageCommands/ImageBuild 3.74
119 TestFunctional/parallel/ImageCommands/Setup 2.06
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.22
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.25
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.9
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.91
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.89
129 TestFunctional/parallel/ImageCommands/ImageRemove 1.19
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.1
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
132 TestFunctional/parallel/ServiceCmd/DeployApp 7.16
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
134 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
138 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
139 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
140 TestFunctional/parallel/ProfileCmd/profile_list 0.37
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
142 TestFunctional/parallel/MountCmd/any-port 9.89
143 TestFunctional/parallel/ServiceCmd/List 0.55
144 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
145 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
146 TestFunctional/parallel/ServiceCmd/Format 0.35
147 TestFunctional/parallel/ServiceCmd/URL 0.34
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
151 TestFunctional/parallel/MountCmd/specific-port 1.94
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.16
153 TestFunctional/parallel/Version/short 0.06
154 TestFunctional/parallel/Version/components 0.5
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 99.84
163 TestMultiControlPlane/serial/DeployApp 6.96
164 TestMultiControlPlane/serial/PingHostFromPods 1.1
165 TestMultiControlPlane/serial/AddWorkerNode 37.39
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
168 TestMultiControlPlane/serial/CopyFile 16.18
169 TestMultiControlPlane/serial/StopSecondaryNode 12.53
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
171 TestMultiControlPlane/serial/RestartSecondaryNode 24.52
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.1
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 156.57
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.52
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
176 TestMultiControlPlane/serial/StopCluster 35.73
177 TestMultiControlPlane/serial/RestartCluster 95.54
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
179 TestMultiControlPlane/serial/AddSecondaryNode 40.8
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.87
184 TestJSONOutput/start/Command 42.31
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.7
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.61
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.8
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.22
209 TestKicCustomNetwork/create_custom_network 36.33
210 TestKicCustomNetwork/use_default_bridge_network 26.34
211 TestKicExistingNetwork 23.83
212 TestKicCustomSubnet 24.97
213 TestKicStaticIP 25.46
214 TestMainNoArgs 0.05
215 TestMinikubeProfile 51
218 TestMountStart/serial/StartWithMountFirst 6.04
219 TestMountStart/serial/VerifyMountFirst 0.26
220 TestMountStart/serial/StartWithMountSecond 6.98
221 TestMountStart/serial/VerifyMountSecond 0.26
222 TestMountStart/serial/DeleteFirst 1.62
223 TestMountStart/serial/VerifyMountPostDelete 0.26
224 TestMountStart/serial/Stop 1.19
225 TestMountStart/serial/RestartStopped 7.82
226 TestMountStart/serial/VerifyMountPostStop 0.25
229 TestMultiNode/serial/FreshStart2Nodes 77.3
230 TestMultiNode/serial/DeployApp2Nodes 5.57
231 TestMultiNode/serial/PingHostFrom2Pods 0.79
232 TestMultiNode/serial/AddNode 33.22
233 TestMultiNode/serial/MultiNodeLabels 0.07
234 TestMultiNode/serial/ProfileList 0.64
235 TestMultiNode/serial/CopyFile 9.5
236 TestMultiNode/serial/StopNode 2.15
237 TestMultiNode/serial/StartAfterStop 9.11
238 TestMultiNode/serial/RestartKeepsNodes 84.09
239 TestMultiNode/serial/DeleteNode 5
240 TestMultiNode/serial/StopMultiNode 23.83
241 TestMultiNode/serial/RestartMultiNode 45.12
242 TestMultiNode/serial/ValidateNameConflict 25.99
247 TestPreload 116.42
249 TestScheduledStopUnix 97.5
252 TestInsufficientStorage 10.38
253 TestRunningBinaryUpgrade 54.29
255 TestKubernetesUpgrade 329.31
256 TestMissingContainerUpgrade 158.04
257 TestStoppedBinaryUpgrade/Setup 2.67
258 TestStoppedBinaryUpgrade/Upgrade 132.76
259 TestStoppedBinaryUpgrade/MinikubeLogs 1.33
274 TestNetworkPlugins/group/false 3.48
279 TestPause/serial/Start 45.32
281 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
282 TestNoKubernetes/serial/StartWithK8s 22.09
283 TestPause/serial/SecondStartNoReconfiguration 36.68
284 TestNoKubernetes/serial/StartWithStopK8s 6.33
285 TestNoKubernetes/serial/Start 5.51
286 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
287 TestNoKubernetes/serial/ProfileList 16.79
288 TestPause/serial/Pause 0.77
289 TestPause/serial/VerifyStatus 0.33
290 TestPause/serial/Unpause 0.68
291 TestNoKubernetes/serial/Stop 1.21
292 TestPause/serial/PauseAgain 0.78
293 TestPause/serial/DeletePaused 2.72
294 TestNoKubernetes/serial/StartNoArgs 9.76
295 TestPause/serial/VerifyDeletedResources 0.72
297 TestStartStop/group/old-k8s-version/serial/FirstStart 140.3
298 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
300 TestStartStop/group/no-preload/serial/FirstStart 62.55
302 TestStartStop/group/embed-certs/serial/FirstStart 48.26
303 TestStartStop/group/no-preload/serial/DeployApp 12.31
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.22
305 TestStartStop/group/no-preload/serial/Stop 12.07
307 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.75
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
309 TestStartStop/group/no-preload/serial/SecondStart 263.78
310 TestStartStop/group/embed-certs/serial/DeployApp 10.3
311 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
312 TestStartStop/group/embed-certs/serial/Stop 12.08
313 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.31
314 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
315 TestStartStop/group/embed-certs/serial/SecondStart 297.49
316 TestStartStop/group/old-k8s-version/serial/DeployApp 11.41
317 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1
318 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.8
319 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.85
320 TestStartStop/group/old-k8s-version/serial/Stop 12.01
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
322 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 262.5
323 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
324 TestStartStop/group/old-k8s-version/serial/SecondStart 131.05
325 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
326 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
327 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
328 TestStartStop/group/old-k8s-version/serial/Pause 2.65
330 TestStartStop/group/newest-cni/serial/FirstStart 27.89
331 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.76
333 TestStartStop/group/newest-cni/serial/Stop 1.2
334 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
335 TestStartStop/group/newest-cni/serial/SecondStart 13.25
336 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
339 TestStartStop/group/newest-cni/serial/Pause 2.96
340 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
341 TestNetworkPlugins/group/auto/Start 43.24
342 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
343 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
344 TestStartStop/group/no-preload/serial/Pause 2.88
345 TestNetworkPlugins/group/flannel/Start 46.89
346 TestNetworkPlugins/group/auto/KubeletFlags 0.32
347 TestNetworkPlugins/group/auto/NetCatPod 10.23
348 TestNetworkPlugins/group/auto/DNS 0.13
349 TestNetworkPlugins/group/auto/Localhost 0.11
350 TestNetworkPlugins/group/auto/HairPin 0.11
351 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
352 TestNetworkPlugins/group/flannel/ControllerPod 6.01
353 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
354 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
355 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
356 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
357 TestNetworkPlugins/group/flannel/NetCatPod 11.22
358 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.46
359 TestNetworkPlugins/group/calico/Start 64.04
360 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
361 TestNetworkPlugins/group/custom-flannel/Start 56.56
362 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.38
363 TestNetworkPlugins/group/flannel/DNS 0.15
364 TestStartStop/group/embed-certs/serial/Pause 3.27
365 TestNetworkPlugins/group/flannel/Localhost 0.14
366 TestNetworkPlugins/group/flannel/HairPin 0.13
367 TestNetworkPlugins/group/kindnet/Start 49.15
368 TestNetworkPlugins/group/bridge/Start 40.01
369 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
370 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.26
371 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
372 TestNetworkPlugins/group/calico/ControllerPod 6.01
373 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
374 TestNetworkPlugins/group/kindnet/NetCatPod 10.2
375 TestNetworkPlugins/group/custom-flannel/DNS 0.14
376 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
377 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
378 TestNetworkPlugins/group/calico/KubeletFlags 0.33
379 TestNetworkPlugins/group/calico/NetCatPod 10.33
380 TestNetworkPlugins/group/bridge/KubeletFlags 0.39
381 TestNetworkPlugins/group/bridge/NetCatPod 10.23
382 TestNetworkPlugins/group/kindnet/DNS 0.15
383 TestNetworkPlugins/group/kindnet/Localhost 0.13
384 TestNetworkPlugins/group/kindnet/HairPin 0.12
385 TestNetworkPlugins/group/calico/DNS 0.15
386 TestNetworkPlugins/group/calico/Localhost 0.12
387 TestNetworkPlugins/group/calico/HairPin 0.13
388 TestNetworkPlugins/group/bridge/DNS 0.17
389 TestNetworkPlugins/group/bridge/Localhost 0.13
390 TestNetworkPlugins/group/bridge/HairPin 0.14
391 TestNetworkPlugins/group/enable-default-cni/Start 38.55
392 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
393 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.18
394 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
395 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
396 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
TestDownloadOnly/v1.20.0/json-events (15.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-864324 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-864324 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (15.350561068s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (15.35s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0407 12:56:14.066018  873820 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0407 12:56:14.066141  873820 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-866963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-864324
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-864324: exit status 85 (70.107262ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-864324 | jenkins | v1.35.0 | 07 Apr 25 12:55 UTC |          |
	|         | -p download-only-864324        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:55:58
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:55:58.759799  873832 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:55:58.759906  873832 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:55:58.759912  873832 out.go:358] Setting ErrFile to fd 2...
	I0407 12:55:58.759916  873832 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:55:58.760118  873832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-866963/.minikube/bin
	W0407 12:55:58.760238  873832 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20598-866963/.minikube/config/config.json: open /home/jenkins/minikube-integration/20598-866963/.minikube/config/config.json: no such file or directory
	I0407 12:55:58.760865  873832 out.go:352] Setting JSON to true
	I0407 12:55:58.761984  873832 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":16702,"bootTime":1744013857,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:55:58.762131  873832 start.go:139] virtualization: kvm guest
	I0407 12:55:58.764603  873832 out.go:97] [download-only-864324] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0407 12:55:58.764750  873832 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20598-866963/.minikube/cache/preloaded-tarball: no such file or directory
	I0407 12:55:58.764791  873832 notify.go:220] Checking for updates...
	I0407 12:55:58.766287  873832 out.go:169] MINIKUBE_LOCATION=20598
	I0407 12:55:58.767895  873832 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:55:58.769572  873832 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20598-866963/kubeconfig
	I0407 12:55:58.771072  873832 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-866963/.minikube
	I0407 12:55:58.772448  873832 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0407 12:55:58.774815  873832 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0407 12:55:58.775108  873832 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:55:58.797793  873832 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 12:55:58.797913  873832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:55:58.845971  873832 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:true NGoroutines:55 SystemTime:2025-04-07 12:55:58.836570973 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0407 12:55:58.846072  873832 docker.go:318] overlay module found
	I0407 12:55:58.848018  873832 out.go:97] Using the docker driver based on user configuration
	I0407 12:55:58.848051  873832 start.go:297] selected driver: docker
	I0407 12:55:58.848057  873832 start.go:901] validating driver "docker" against <nil>
	I0407 12:55:58.848137  873832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:55:58.895905  873832 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:true NGoroutines:55 SystemTime:2025-04-07 12:55:58.88666651 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0407 12:55:58.896095  873832 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:55:58.896590  873832 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0407 12:55:58.896753  873832 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0407 12:55:58.898555  873832 out.go:169] Using Docker driver with root privileges
	I0407 12:55:58.899726  873832 cni.go:84] Creating CNI manager for ""
	I0407 12:55:58.899805  873832 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0407 12:55:58.899823  873832 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0407 12:55:58.899912  873832 start.go:340] cluster config:
	{Name:download-only-864324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-864324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:55:58.901237  873832 out.go:97] Starting "download-only-864324" primary control-plane node in "download-only-864324" cluster
	I0407 12:55:58.901258  873832 cache.go:121] Beginning downloading kic base image for docker with crio
	I0407 12:55:58.902346  873832 out.go:97] Pulling base image v0.0.46-1743675393-20591 ...
	I0407 12:55:58.902377  873832 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0407 12:55:58.902472  873832 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon
	I0407 12:55:58.919039  873832 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 to local cache
	I0407 12:55:58.919206  873832 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local cache directory
	I0407 12:55:58.919289  873832 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 to local cache
	I0407 12:55:59.421964  873832 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0407 12:55:59.422008  873832 cache.go:56] Caching tarball of preloaded images
	I0407 12:55:59.422204  873832 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0407 12:55:59.424132  873832 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0407 12:55:59.424158  873832 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0407 12:55:59.537157  873832 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20598-866963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0407 12:56:04.120695  873832 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 as a tarball
	I0407 12:56:12.134311  873832 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0407 12:56:12.134431  873832 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20598-866963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0407 12:56:13.077255  873832 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0407 12:56:13.077700  873832 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/download-only-864324/config.json ...
	I0407 12:56:13.077741  873832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/download-only-864324/config.json: {Name:mk0bc369b2c7961dff89e41c5199055e6233db67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:13.077920  873832 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0407 12:56:13.078117  873832 download.go:108] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20598-866963/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-864324 host does not exist
	  To start a cluster, run: "minikube start -p download-only-864324"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-864324
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.32.2/json-events (13.37s)

=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-156445 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-156445 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.374596977s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (13.37s)

TestDownloadOnly/v1.32.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0407 12:56:27.881280  873820 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
I0407 12:56:27.881378  873820 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-866963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

TestDownloadOnly/v1.32.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-156445
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-156445: exit status 85 (69.009891ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-864324 | jenkins | v1.35.0 | 07 Apr 25 12:55 UTC |                     |
	|         | -p download-only-864324        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 07 Apr 25 12:56 UTC | 07 Apr 25 12:56 UTC |
	| delete  | -p download-only-864324        | download-only-864324 | jenkins | v1.35.0 | 07 Apr 25 12:56 UTC | 07 Apr 25 12:56 UTC |
	| start   | -o=json --download-only        | download-only-156445 | jenkins | v1.35.0 | 07 Apr 25 12:56 UTC |                     |
	|         | -p download-only-156445        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:56:14
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:56:14.554825  874199 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:56:14.555148  874199 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:56:14.555160  874199 out.go:358] Setting ErrFile to fd 2...
	I0407 12:56:14.555164  874199 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:56:14.555347  874199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-866963/.minikube/bin
	I0407 12:56:14.555975  874199 out.go:352] Setting JSON to true
	I0407 12:56:14.557103  874199 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":16718,"bootTime":1744013857,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:56:14.557233  874199 start.go:139] virtualization: kvm guest
	I0407 12:56:14.559371  874199 out.go:97] [download-only-156445] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 12:56:14.559545  874199 notify.go:220] Checking for updates...
	I0407 12:56:14.561032  874199 out.go:169] MINIKUBE_LOCATION=20598
	I0407 12:56:14.562622  874199 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:56:14.564301  874199 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20598-866963/kubeconfig
	I0407 12:56:14.565720  874199 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-866963/.minikube
	I0407 12:56:14.567164  874199 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0407 12:56:14.570020  874199 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0407 12:56:14.570347  874199 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:56:14.594740  874199 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 12:56:14.594931  874199 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:56:14.648904  874199 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2025-04-07 12:56:14.639408357 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0407 12:56:14.649014  874199 docker.go:318] overlay module found
	I0407 12:56:14.650921  874199 out.go:97] Using the docker driver based on user configuration
	I0407 12:56:14.650961  874199 start.go:297] selected driver: docker
	I0407 12:56:14.650967  874199 start.go:901] validating driver "docker" against <nil>
	I0407 12:56:14.651057  874199 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:56:14.699673  874199 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2025-04-07 12:56:14.690675861 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0407 12:56:14.699869  874199 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:56:14.700487  874199 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0407 12:56:14.700660  874199 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0407 12:56:14.703115  874199 out.go:169] Using Docker driver with root privileges
	I0407 12:56:14.704914  874199 cni.go:84] Creating CNI manager for ""
	I0407 12:56:14.704997  874199 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0407 12:56:14.705009  874199 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0407 12:56:14.705094  874199 start.go:340] cluster config:
	{Name:download-only-156445 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:download-only-156445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:56:14.706866  874199 out.go:97] Starting "download-only-156445" primary control-plane node in "download-only-156445" cluster
	I0407 12:56:14.706895  874199 cache.go:121] Beginning downloading kic base image for docker with crio
	I0407 12:56:14.708274  874199 out.go:97] Pulling base image v0.0.46-1743675393-20591 ...
	I0407 12:56:14.708314  874199 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 12:56:14.708392  874199 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon
	I0407 12:56:14.726050  874199 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 to local cache
	I0407 12:56:14.726230  874199 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local cache directory
	I0407 12:56:14.726257  874199 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local cache directory, skipping pull
	I0407 12:56:14.726264  874199 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 exists in cache, skipping pull
	I0407 12:56:14.726278  874199 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 as a tarball
	I0407 12:56:15.223764  874199 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0407 12:56:15.223820  874199 cache.go:56] Caching tarball of preloaded images
	I0407 12:56:15.223993  874199 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 12:56:15.226186  874199 out.go:97] Downloading Kubernetes v1.32.2 preload ...
	I0407 12:56:15.226220  874199 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 ...
	I0407 12:56:15.334571  874199 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:a1ce605168a895ad5f3b3c8db1fe4d66 -> /home/jenkins/minikube-integration/20598-866963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-156445 host does not exist
	  To start a cluster, run: "minikube start -p download-only-156445"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.07s)
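The preload fetch logged above appends the expected digest to the URL as `?checksum=md5:a1ce605168a895ad5f3b3c8db1fe4d66`. As a rough illustration only (this is not minikube's actual code; the filename and contents below are stand-ins), verifying a downloaded tarball against such a digest looks like:

```shell
# Hypothetical sketch: compare a downloaded preload tarball's md5 against
# the digest carried in the ?checksum=md5:... query parameter.
expected="a1ce605168a895ad5f3b3c8db1fe4d66"   # value from the log above
file="preloaded-images.tar.lz4"               # stand-in local filename
printf 'stand-in tarball contents' > "$file"  # placeholder so the sketch runs
actual=$(md5sum "$file" | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
  echo "checksum ok: $file"
else
  echo "checksum mismatch for $file"   # placeholder contents will not match
fi
```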

TestDownloadOnly/v1.32.2/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.23s)

TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-156445
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (1.15s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-988404 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-988404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-988404
--- PASS: TestDownloadOnlyKic (1.15s)

TestBinaryMirror (0.84s)

=== RUN   TestBinaryMirror
I0407 12:56:29.752209  873820 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-720301 --alsologtostderr --binary-mirror http://127.0.0.1:32847 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-720301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-720301
--- PASS: TestBinaryMirror (0.84s)
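The `checksum=file:` form in the log above points at a sibling `.sha256` file rather than an inline digest. A minimal local sketch of that scheme (stand-in files generated on the spot; this is not the dl.k8s.io flow itself):

```shell
# Hypothetical sketch of the checksum=file: scheme: compare a binary
# against the digest published in a sibling .sha256 file.
printf 'stand-in kubectl binary' > kubectl
sha256sum kubectl | awk '{print $1}' > kubectl.sha256   # simulate the published digest
published=$(cat kubectl.sha256)
actual=$(sha256sum kubectl | awk '{print $1}')
[ "$actual" = "$published" ] && echo "kubectl digest verified"
```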

TestOffline (60.03s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-094314 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-094314 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (55.779233426s)
helpers_test.go:175: Cleaning up "offline-crio-094314" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-094314
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-094314: (4.249298647s)
--- PASS: TestOffline (60.03s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-665428
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-665428: exit status 85 (60.677566ms)
-- stdout --
	* Profile "addons-665428" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-665428"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-665428
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-665428: exit status 85 (61.717444ms)
-- stdout --
	* Profile "addons-665428" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-665428"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (149.99s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-665428 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-665428 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m29.991228266s)
--- PASS: TestAddons/Setup (149.99s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-665428 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-665428 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (9.49s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-665428 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-665428 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d3c19c3e-fefe-48f8-bff7-68427538b9a9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d3c19c3e-fefe-48f8-bff7-68427538b9a9] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003665334s
addons_test.go:633: (dbg) Run:  kubectl --context addons-665428 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-665428 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-665428 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.49s)

TestAddons/parallel/Registry (16.69s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.932712ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-fmmn4" [a93a9c6b-2be3-41ec-8038-5a2cc8c9b88e] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002239337s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xfhzf" [e37d8690-bccf-493b-9eb0-c72781088971] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00324786s
addons_test.go:331: (dbg) Run:  kubectl --context addons-665428 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-665428 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-665428 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.093912697s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-665428 ip
2025/04/07 12:59:34 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-665428 addons disable registry --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-665428 addons disable registry --alsologtostderr -v=1: (1.384114907s)
--- PASS: TestAddons/parallel/Registry (16.69s)

TestAddons/parallel/InspektorGadget (12.25s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xrzdl" [e89ef082-9699-4528-984e-1f9ff9ad9dfd] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003577969s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-665428 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-665428 addons disable inspektor-gadget --alsologtostderr -v=1: (6.249365242s)
--- PASS: TestAddons/parallel/InspektorGadget (12.25s)

TestAddons/parallel/MetricsServer (5.75s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 70.258926ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-5hlgg" [f2b2888f-f9f3-4735-b2bb-53236b5f80ac] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.02529589s
addons_test.go:402: (dbg) Run:  kubectl --context addons-665428 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-665428 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.75s)

TestAddons/parallel/CSI (62s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0407 12:59:31.044047  873820 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0407 12:59:31.093934  873820 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0407 12:59:31.093976  873820 kapi.go:107] duration metric: took 49.941834ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 49.956964ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-665428 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-665428 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [029ce395-f383-4ca4-a9ea-7edcd73a0b9e] Pending
helpers_test.go:344: "task-pv-pod" [029ce395-f383-4ca4-a9ea-7edcd73a0b9e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [029ce395-f383-4ca4-a9ea-7edcd73a0b9e] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004751717s
addons_test.go:511: (dbg) Run:  kubectl --context addons-665428 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-665428 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-665428 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-665428 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-665428 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-665428 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-665428 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [66a88fb9-7375-46e1-90bc-9b40921270b2] Pending
helpers_test.go:344: "task-pv-pod-restore" [66a88fb9-7375-46e1-90bc-9b40921270b2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [66a88fb9-7375-46e1-90bc-9b40921270b2] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003895969s
addons_test.go:553: (dbg) Run:  kubectl --context addons-665428 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-665428 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-665428 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-665428 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-665428 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-665428 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.629653772s)
--- PASS: TestAddons/parallel/CSI (62.00s)
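The repeated `kubectl get pvc ... -o jsonpath={.status.phase}` lines above are a poll loop: the test helper re-reads the PVC phase until it reports `Bound` or the wait deadline expires. A minimal shell sketch of that pattern, with a stub `check` function (hypothetical; it becomes `Bound` on its third call) standing in for the real `kubectl` probe:

```shell
# Sketch of the helpers_test.go wait pattern: re-run a status probe until
# it reports "Bound" or the deadline passes. `check` is a stub for
# `kubectl get pvc hpvc -o jsonpath={.status.phase}`.
n=0
check() {
  n=$((n + 1))
  if [ "$n" -ge 3 ]; then phase="Bound"; else phase="Pending"; fi
}
deadline=$(( $(date +%s) + 10 ))   # short deadline for the sketch
phase=""
while [ "$(date +%s)" -lt "$deadline" ]; do
  check
  [ "$phase" = "Bound" ] && break
  sleep 1
done
echo "final phase: $phase"   # prints "final phase: Bound"
```

Note the stub is called directly rather than via `$(check)`: a command substitution would run it in a subshell and lose the counter update.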

TestAddons/parallel/Headlamp (19.6s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-665428 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-dqs2h" [7ae54022-64c0-4638-8696-5cdf7fc84d5d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-dqs2h" [7ae54022-64c0-4638-8696-5cdf7fc84d5d] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004039394s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-665428 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-665428 addons disable headlamp --alsologtostderr -v=1: (5.678384062s)
--- PASS: TestAddons/parallel/Headlamp (19.60s)

TestAddons/parallel/CloudSpanner (5.53s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-cc9755fc7-5b4zl" [b879475a-676b-4e22-8fff-591d6cd5a68e] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003197319s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-665428 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.53s)

TestAddons/parallel/LocalPath (18.22s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-665428 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-665428 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-665428 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [04bf9133-a563-45be-aaa0-d7d70c30a260] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [04bf9133-a563-45be-aaa0-d7d70c30a260] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [04bf9133-a563-45be-aaa0-d7d70c30a260] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 10.003713656s
addons_test.go:906: (dbg) Run:  kubectl --context addons-665428 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-665428 ssh "cat /opt/local-path-provisioner/pvc-c512f8be-60e7-4823-8659-de8045a39758_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-665428 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-665428 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-665428 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (18.22s)

TestAddons/parallel/NvidiaDevicePlugin (6.67s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-j7hn6" [ed429b16-b5ab-41ee-b109-c010fae4423b] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.002697415s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-665428 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.67s)

TestAddons/parallel/Yakd (11.76s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-jtdw9" [993da5db-61f7-415a-ab3f-72602c2b06dc] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003523229s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-665428 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-665428 addons disable yakd --alsologtostderr -v=1: (5.759345193s)
--- PASS: TestAddons/parallel/Yakd (11.76s)

TestAddons/parallel/AmdGpuDevicePlugin (6.68s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-cwhmr" [5c9f65b0-5a0d-4939-8fa2-1b708daa4020] Running
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.004544214s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-665428 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.68s)

TestAddons/StoppedEnableDisable (12.2s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-665428
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-665428: (11.911881162s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-665428
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-665428
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-665428
--- PASS: TestAddons/StoppedEnableDisable (12.20s)

TestCertOptions (26.62s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-493295 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-493295 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (23.692081549s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-493295 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-493295 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-493295 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-493295" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-493295
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-493295: (2.162290023s)
--- PASS: TestCertOptions (26.62s)

TestCertExpiration (234.92s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-289109 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-289109 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.090730743s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-289109 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-289109 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (25.848034359s)
helpers_test.go:175: Cleaning up "cert-expiration-289109" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-289109
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-289109: (1.983376084s)
--- PASS: TestCertExpiration (234.92s)

TestForceSystemdFlag (29.07s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-370914 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-370914 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.023002448s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-370914 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-370914" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-370914
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-370914: (4.77600487s)
--- PASS: TestForceSystemdFlag (29.07s)

TestForceSystemdEnv (29.11s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-791631 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0407 13:30:14.094406  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/functional-893516/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-791631 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.65471914s)
helpers_test.go:175: Cleaning up "force-systemd-env-791631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-791631
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-791631: (2.452507292s)
--- PASS: TestForceSystemdEnv (29.11s)

TestKVMDriverInstallOrUpdate (4.84s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0407 13:30:01.924230  873820 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0407 13:30:01.924433  873820 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0407 13:30:01.963285  873820 install.go:62] docker-machine-driver-kvm2: exit status 1
W0407 13:30:01.963451  873820 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0407 13:30:01.963504  873820 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate319875780/001/docker-machine-driver-kvm2
I0407 13:30:02.241762  873820 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate319875780/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc0003f75f8 gz:0xc0003f7810 tar:0xc0003f7680 tar.bz2:0xc0003f76d0 tar.gz:0xc0003f76e0 tar.xz:0xc0003f7740 tar.zst:0xc0003f77f0 tbz2:0xc0003f76d0 tgz:0xc0003f76e0 txz:0xc0003f7740 tzst:0xc0003f77f0 xz:0xc0003f7818 zip:0xc0003f7820 zst:0xc0003f7860] Getters:map[file:0xc001cbc310 http:0xc000812690 https:0xc0008126e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0407 13:30:02.241824  873820 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate319875780/001/docker-machine-driver-kvm2
I0407 13:30:04.619331  873820 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0407 13:30:04.619454  873820 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0407 13:30:04.656432  873820 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0407 13:30:04.656475  873820 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0407 13:30:04.656630  873820 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0407 13:30:04.656677  873820 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate319875780/002/docker-machine-driver-kvm2
I0407 13:30:04.731159  873820 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate319875780/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc0003f75f8 gz:0xc0003f7810 tar:0xc0003f7680 tar.bz2:0xc0003f76d0 tar.gz:0xc0003f76e0 tar.xz:0xc0003f7740 tar.zst:0xc0003f77f0 tbz2:0xc0003f76d0 tgz:0xc0003f76e0 txz:0xc0003f7740 tzst:0xc0003f77f0 xz:0xc0003f7818 zip:0xc0003f7820 zst:0xc0003f7860] Getters:map[file:0xc00145a780 http:0xc00064def0 https:0xc0006b0000] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0407 13:30:04.731234  873820 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate319875780/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.84s)

TestErrorSpam/setup (21.54s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-150811 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-150811 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-150811 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-150811 --driver=docker  --container-runtime=crio: (21.542450947s)
--- PASS: TestErrorSpam/setup (21.54s)

TestErrorSpam/start (0.61s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-150811 --log_dir /tmp/nospam-150811 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-150811 --log_dir /tmp/nospam-150811 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-150811 --log_dir /tmp/nospam-150811 start --dry-run
--- PASS: TestErrorSpam/start (0.61s)

TestErrorSpam/status (0.89s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-150811 --log_dir /tmp/nospam-150811 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-150811 --log_dir /tmp/nospam-150811 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-150811 --log_dir /tmp/nospam-150811 status
--- PASS: TestErrorSpam/status (0.89s)

TestErrorSpam/pause (1.63s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-150811 --log_dir /tmp/nospam-150811 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-150811 --log_dir /tmp/nospam-150811 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-150811 --log_dir /tmp/nospam-150811 pause
--- PASS: TestErrorSpam/pause (1.63s)

TestErrorSpam/unpause (1.65s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-150811 --log_dir /tmp/nospam-150811 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-150811 --log_dir /tmp/nospam-150811 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-150811 --log_dir /tmp/nospam-150811 unpause
--- PASS: TestErrorSpam/unpause (1.65s)

TestErrorSpam/stop (1.48s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-150811 --log_dir /tmp/nospam-150811 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-150811 --log_dir /tmp/nospam-150811 stop: (1.279481217s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-150811 --log_dir /tmp/nospam-150811 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-150811 --log_dir /tmp/nospam-150811 stop
--- PASS: TestErrorSpam/stop (1.48s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20598-866963/.minikube/files/etc/test/nested/copy/873820/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (44.97s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-893516 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-893516 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (44.966210644s)
--- PASS: TestFunctional/serial/StartWithProxy (44.97s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (31.59s)

=== RUN   TestFunctional/serial/SoftStart
I0407 13:03:52.894115  873820 config.go:182] Loaded profile config "functional-893516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-893516 --alsologtostderr -v=8
E0407 13:04:01.220821  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:04:01.227361  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:04:01.238872  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:04:01.260465  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:04:01.301981  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:04:01.383638  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:04:01.545267  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:04:01.867176  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:04:02.509338  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:04:03.791024  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:04:06.354081  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:04:11.475613  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:04:21.717825  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-893516 --alsologtostderr -v=8: (31.584102334s)
functional_test.go:680: soft start took 31.584887923s for "functional-893516" cluster.
I0407 13:04:24.478605  873820 config.go:182] Loaded profile config "functional-893516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (31.59s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-893516 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-893516 cache add registry.k8s.io/pause:3.1: (1.12066189s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-893516 cache add registry.k8s.io/pause:3.3: (1.016122761s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-893516 cache add registry.k8s.io/pause:latest: (1.200236816s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.34s)

TestFunctional/serial/CacheCmd/cache/add_local (2.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-893516 /tmp/TestFunctionalserialCacheCmdcacheadd_local2409016506/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 cache add minikube-local-cache-test:functional-893516
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-893516 cache add minikube-local-cache-test:functional-893516: (1.830289861s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 cache delete minikube-local-cache-test:functional-893516
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-893516
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.18s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-893516 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (276.098898ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)
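The cache_reload steps above follow a remove, verify-missing, reload, verify-present pattern. A minimal self-contained sketch of that pattern, with a scratch file standing in for the cached image (the real commands, `crictl rmi`, `crictl inspecti`, and `minikube cache reload`, appear verbatim in the log above):

```shell
# Sketch only: a scratch file stands in for the cached image so the
# remove/verify/reload/verify sequence can run anywhere.
cache=$(mktemp -d)
echo "registry.k8s.io/pause:latest" > "$cache/image"  # image is cached
rm "$cache/image"                                     # ~ crictl rmi
if [ -f "$cache/image" ]; then
  echo "unexpected: image still present"
else
  echo "image missing as expected"   # ~ crictl inspecti exits non-zero
fi
echo "registry.k8s.io/pause:latest" > "$cache/image"  # ~ cache reload
[ -f "$cache/image" ] && echo "image restored"        # ~ inspecti succeeds
rm -rf "$cache"
```

The test passes because the second `inspecti` succeeds only after the reload restores the image.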

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 kubectl -- --context functional-893516 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-893516 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (33.3s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-893516 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0407 13:04:42.199981  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-893516 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.294741623s)
functional_test.go:778: restart took 33.294891962s for "functional-893516" cluster.
I0407 13:05:05.899368  873820 config.go:182] Loaded profile config "functional-893516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (33.30s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-893516 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.44s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-893516 logs: (1.439342183s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

TestFunctional/serial/LogsFileCmd (1.48s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 logs --file /tmp/TestFunctionalserialLogsFileCmd4093502634/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-893516 logs --file /tmp/TestFunctionalserialLogsFileCmd4093502634/001/logs.txt: (1.47890977s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

TestFunctional/serial/InvalidService (4.23s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-893516 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-893516
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-893516: exit status 115 (340.184669ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30186 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-893516 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.23s)

TestFunctional/parallel/ConfigCmd (0.38s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-893516 config get cpus: exit status 14 (68.707267ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-893516 config get cpus: exit status 14 (59.818536ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
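The round-trip above (unset, get fails with exit status 14, set, get succeeds, unset, get fails again) can be sketched with a file-backed key store. The store and the `cfg_*` helper names are illustrative, not minikube's implementation:

```shell
# Illustrative stand-in for `minikube config`: a temp file holds
# key=value pairs; `get` on a missing key exits non-zero, mirroring
# minikube's exit status 14 ("specified key could not be found").
cfg=$(mktemp)
cfg_get()   { grep "^$1=" "$cfg" | cut -d= -f2 | grep .; }
cfg_set()   { echo "$1=$2" >> "$cfg"; }
cfg_unset() { grep -v "^$1=" "$cfg" > "$cfg.tmp"; mv "$cfg.tmp" "$cfg"; }

cfg_get cpus || echo "get failed: key not set"   # analogous to exit status 14
cfg_set cpus 2
echo "cpus=$(cfg_get cpus)"                      # prints cpus=2
cfg_unset cpus
cfg_get cpus || echo "get failed after unset"
rm -f "$cfg"
```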

TestFunctional/parallel/DashboardCmd (12.96s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-893516 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-893516 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 912624: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.96s)

TestFunctional/parallel/DryRun (0.37s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-893516 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-893516 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (160.69281ms)

-- stdout --
	* [functional-893516] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-866963/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-866963/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0407 13:05:30.861013  911813 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:05:30.861288  911813 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:05:30.861320  911813 out.go:358] Setting ErrFile to fd 2...
	I0407 13:05:30.861328  911813 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:05:30.861523  911813 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-866963/.minikube/bin
	I0407 13:05:30.862082  911813 out.go:352] Setting JSON to false
	I0407 13:05:30.863282  911813 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":17274,"bootTime":1744013857,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 13:05:30.863359  911813 start.go:139] virtualization: kvm guest
	I0407 13:05:30.865469  911813 out.go:177] * [functional-893516] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 13:05:30.867069  911813 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 13:05:30.867086  911813 notify.go:220] Checking for updates...
	I0407 13:05:30.870143  911813 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:05:30.871438  911813 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-866963/kubeconfig
	I0407 13:05:30.872673  911813 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-866963/.minikube
	I0407 13:05:30.873826  911813 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 13:05:30.875025  911813 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:05:30.876916  911813 config.go:182] Loaded profile config "functional-893516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:05:30.877451  911813 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:05:30.901204  911813 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 13:05:30.901384  911813 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 13:05:30.957233  911813 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:55 SystemTime:2025-04-07 13:05:30.947059062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0407 13:05:30.957425  911813 docker.go:318] overlay module found
	I0407 13:05:30.959784  911813 out.go:177] * Using the docker driver based on existing profile
	I0407 13:05:30.961370  911813 start.go:297] selected driver: docker
	I0407 13:05:30.961391  911813 start.go:901] validating driver "docker" against &{Name:functional-893516 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-893516 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:05:30.961478  911813 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:05:30.963986  911813 out.go:201] 
	W0407 13:05:30.965573  911813 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0407 13:05:30.966999  911813 out.go:201] 

** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-893516 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.37s)
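The first dry run fails because the requested 250MB is below minikube's usable minimum of 1800MB, producing the RSRC_INSUFFICIENT_REQ_MEMORY error above with exit status 23. A simplified stand-in for that validation, with the threshold and exit status taken from the log (not minikube's actual code):

```shell
# Simplified stand-in for minikube's requested-memory check; the
# 1800MB minimum and exit status 23 are taken from the log above.
requested_mb=250
minimum_mb=1800
if [ "$requested_mb" -lt "$minimum_mb" ]; then
  echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: ${requested_mb}MiB < ${minimum_mb}MB"
  status=23
else
  status=0
fi
echo "exit status: $status"
```

The second dry run (without `--memory`) keeps the profile's existing 4000MB allocation, so it passes the same check.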

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-893516 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-893516 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (165.975945ms)

-- stdout --
	* [functional-893516] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-866963/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-866963/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0407 13:05:30.698177  911671 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:05:30.698296  911671 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:05:30.698305  911671 out.go:358] Setting ErrFile to fd 2...
	I0407 13:05:30.698310  911671 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:05:30.698594  911671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-866963/.minikube/bin
	I0407 13:05:30.699231  911671 out.go:352] Setting JSON to false
	I0407 13:05:30.700467  911671 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":17274,"bootTime":1744013857,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 13:05:30.700598  911671 start.go:139] virtualization: kvm guest
	I0407 13:05:30.703085  911671 out.go:177] * [functional-893516] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0407 13:05:30.704915  911671 notify.go:220] Checking for updates...
	I0407 13:05:30.707047  911671 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 13:05:30.708441  911671 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:05:30.709744  911671 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-866963/kubeconfig
	I0407 13:05:30.711248  911671 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-866963/.minikube
	I0407 13:05:30.712865  911671 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 13:05:30.714870  911671 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:05:30.716821  911671 config.go:182] Loaded profile config "functional-893516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:05:30.717526  911671 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:05:30.742380  911671 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 13:05:30.742554  911671 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 13:05:30.797711  911671 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:55 SystemTime:2025-04-07 13:05:30.78802682 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0407 13:05:30.797812  911671 docker.go:318] overlay module found
	I0407 13:05:30.799717  911671 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0407 13:05:30.801084  911671 start.go:297] selected driver: docker
	I0407 13:05:30.801121  911671 start.go:901] validating driver "docker" against &{Name:functional-893516 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-893516 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:05:30.801254  911671 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:05:30.803735  911671 out.go:201] 
	W0407 13:05:30.805089  911671 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0407 13:05:30.806247  911671 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (1.03s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)

TestFunctional/parallel/ServiceCmdConnect (14.57s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-893516 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-893516 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-lvfkj" [1fcae8e0-c712-471f-9300-719101309678] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-lvfkj" [1fcae8e0-c712-471f-9300-719101309678] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 14.004317175s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:30918
functional_test.go:1692: http://192.168.49.2:30918: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-lvfkj

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30918
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (14.57s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (48.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [65e5cea7-0d73-4ca2-98f3-c31cbfb4e2db] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004771681s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-893516 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-893516 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-893516 get pvc myclaim -o=json
I0407 13:05:20.509745  873820 retry.go:31] will retry after 2.712059155s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:a2243818-dfb0-4bc7-a454-2a1812b50f4b ResourceVersion:748 Generation:0 CreationTimestamp:2025-04-07 13:05:20 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0018015f0 VolumeMode:0xc001801600 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-893516 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-893516 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1fdbc4c0-2d7f-4070-95c2-e9112fa6bad6] Pending
helpers_test.go:344: "sp-pod" [1fdbc4c0-2d7f-4070-95c2-e9112fa6bad6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1fdbc4c0-2d7f-4070-95c2-e9112fa6bad6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003934498s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-893516 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-893516 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-893516 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e2061046-7458-43b7-8e02-fe97ff983b22] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e2061046-7458-43b7-8e02-fe97ff983b22] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.003152601s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-893516 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.35s)
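The `retry.go:31] will retry after 2.712059155s: testpvc phase = "Pending", want "Bound"` line above shows the harness re-polling the claim until it binds. A minimal sketch of that poll-with-backoff shape; the `check` callback is a hypothetical stand-in for running `kubectl get pvc myclaim -o=json` and reading the phase, and the delays are much shorter than the real ones.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForPhase re-runs check until it reports want, sleeping a little
// longer after each miss, then gives up after attempts tries.
func waitForPhase(check func() string, want string, attempts int) error {
	delay := 10 * time.Millisecond
	for i := 0; i < attempts; i++ {
		got := check()
		if got == want {
			return nil
		}
		fmt.Printf("retry %d: phase = %q, want %q\n", i+1, got, want)
		time.Sleep(delay)
		delay *= 2 // back off between polls
	}
	return errors.New("gave up waiting for phase " + want)
}

func main() {
	// Simulate a claim that binds on the third poll.
	calls := 0
	check := func() string {
		calls++
		if calls >= 3 {
			return "Bound"
		}
		return "Pending"
	}
	if err := waitForPhase(check, "Bound", 5); err != nil {
		panic(err)
	}
	fmt.Println("phase reached: Bound")
}
```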

                                                
                                    
TestFunctional/parallel/SSHCmd (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.59s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh -n functional-893516 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 cp functional-893516:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3905323111/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh -n functional-893516 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh -n functional-893516 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.78s)

                                                
                                    
TestFunctional/parallel/MySQL (31.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-893516 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-sfzl2" [0afae7c8-0257-4901-9119-a938f371fc31] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-sfzl2" [0afae7c8-0257-4901-9119-a938f371fc31] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 27.004500429s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-893516 exec mysql-58ccfd96bb-sfzl2 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-893516 exec mysql-58ccfd96bb-sfzl2 -- mysql -ppassword -e "show databases;": exit status 1 (117.052334ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0407 13:06:01.448833  873820 retry.go:31] will retry after 631.251451ms: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-893516 exec mysql-58ccfd96bb-sfzl2 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-893516 exec mysql-58ccfd96bb-sfzl2 -- mysql -ppassword -e "show databases;": exit status 1 (108.608416ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0407 13:06:02.189361  873820 retry.go:31] will retry after 1.266974425s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-893516 exec mysql-58ccfd96bb-sfzl2 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-893516 exec mysql-58ccfd96bb-sfzl2 -- mysql -ppassword -e "show databases;": exit status 1 (108.92291ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0407 13:06:03.566323  873820 retry.go:31] will retry after 2.080573885s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-893516 exec mysql-58ccfd96bb-sfzl2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (31.59s)
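The retries above pass because both failures are normal MySQL startup noise: ERROR 1045 while the init scripts are still provisioning the root password, then ERROR 2002 while the server socket is not yet listening. A sketch of classifying those two as transient; note that treating 1045 as transient is a judgment call that only holds for a freshly started pod.

```go
package main

import (
	"fmt"
	"strings"
)

// isStartupTransient reports whether mysql client stderr looks like the
// container is still initializing, checking exactly the two error codes
// the retries above hit.
func isStartupTransient(stderr string) bool {
	return strings.Contains(stderr, "ERROR 2002") ||
		strings.Contains(stderr, "ERROR 1045")
}

func main() {
	fmt.Println(isStartupTransient("ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)"))
	fmt.Println(isStartupTransient("ERROR 1064 (42000): You have an error in your SQL syntax"))
}
```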

                                                
                                    
TestFunctional/parallel/FileSync (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/873820/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh "sudo cat /etc/test/nested/copy/873820/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

                                                
                                    
TestFunctional/parallel/CertSync (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/873820.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh "sudo cat /etc/ssl/certs/873820.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/873820.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh "sudo cat /usr/share/ca-certificates/873820.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/8738202.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh "sudo cat /etc/ssl/certs/8738202.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/8738202.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh "sudo cat /usr/share/ca-certificates/8738202.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.67s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-893516 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
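The template passed to kubectl above, `{{range $k, $v := ...labels}}{{$k}} {{end}}`, emits each label key followed by a space. The same template can be exercised locally with Go's `text/template` against a plain map (which it iterates in sorted key order); the label values here are hypothetical.

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// renderLabels applies the test's go-template to a label map, producing
// each key followed by a space.
func renderLabels(labels map[string]string) (string, error) {
	tmpl, err := template.New("labels").Parse(`{{range $k, $v := .}}{{$k}} {{end}}`)
	if err != nil {
		return "", err
	}
	var b strings.Builder
	if err := tmpl.Execute(&b, labels); err != nil {
		return "", err
	}
	return b.String(), nil
}

func main() {
	// Hypothetical node labels; a real node carries many more.
	out, err := renderLabels(map[string]string{
		"kubernetes.io/hostname": "functional-893516",
		"kubernetes.io/os":       "linux",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```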

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-893516 ssh "sudo systemctl is-active docker": exit status 1 (311.229613ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-893516 ssh "sudo systemctl is-active containerd": exit status 1 (291.170482ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.60s)
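The test expects the non-zero exits above: `systemctl is-active` exits 0 only when the unit is active, and here both docker and containerd print "inactive" and exit with status 3 because crio is the configured runtime. A small sketch of reading such a result; the function name and shape are this sketch's own, not minikube's.

```go
package main

import (
	"fmt"
	"strings"
)

// interpretIsActive reads a `systemctl is-active <unit>` result: exit
// status 0 means active, any non-zero status means not active, and the
// printed word ("inactive", "failed", ...) carries the detail.
func interpretIsActive(exitCode int, stdout string) (active bool, state string) {
	return exitCode == 0, strings.TrimSpace(stdout)
}

func main() {
	// Mirrors the docker/containerd checks in the log above.
	active, state := interpretIsActive(3, "inactive\n")
	fmt.Printf("active=%v state=%s\n", active, state)
}
```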

                                                
                                    
TestFunctional/parallel/License (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-893516 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-893516
localhost/kicbase/echo-server:functional-893516
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250214-acbabc1a
docker.io/kindest/kindnetd:v20241212-9f82dd49
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-893516 image ls --format short --alsologtostderr:
I0407 13:05:40.426123  914911 out.go:345] Setting OutFile to fd 1 ...
I0407 13:05:40.426690  914911 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:05:40.426749  914911 out.go:358] Setting ErrFile to fd 2...
I0407 13:05:40.426768  914911 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:05:40.427266  914911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-866963/.minikube/bin
I0407 13:05:40.428396  914911 config.go:182] Loaded profile config "functional-893516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 13:05:40.428503  914911 config.go:182] Loaded profile config "functional-893516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 13:05:40.428871  914911 cli_runner.go:164] Run: docker container inspect functional-893516 --format={{.State.Status}}
I0407 13:05:40.447883  914911 ssh_runner.go:195] Run: systemctl --version
I0407 13:05:40.447942  914911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-893516
I0407 13:05:40.466030  914911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33299 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/functional-893516/id_rsa Username:docker}
I0407 13:05:40.554474  914911 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-893516 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-scheduler          | v1.32.2            | d8e673e7c9983 | 70.7MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.32.2            | b6a454c5a800d | 90.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| localhost/minikube-local-cache-test     | functional-893516  | 10ba1c644f220 | 3.33kB |
| localhost/my-image                      | functional-893516  | 63a8de6724709 | 1.47MB |
| docker.io/library/nginx                 | alpine             | 1ff4bb4faebcf | 49.3MB |
| docker.io/library/nginx                 | latest             | 53a18edff8091 | 196MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-apiserver          | v1.32.2            | 85b7a174738ba | 98.1MB |
| docker.io/kindest/kindnetd              | v20241212-9f82dd49 | d300845f67aeb | 95.7MB |
| docker.io/kindest/kindnetd              | v20250214-acbabc1a | df3849d954c98 | 95.7MB |
| localhost/kicbase/echo-server           | functional-893516  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-proxy              | v1.32.2            | f1332858868e1 | 95.3MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-893516 image ls --format table --alsologtostderr:
I0407 13:05:44.437734  915503 out.go:345] Setting OutFile to fd 1 ...
I0407 13:05:44.437984  915503 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:05:44.437994  915503 out.go:358] Setting ErrFile to fd 2...
I0407 13:05:44.437999  915503 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:05:44.438180  915503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-866963/.minikube/bin
I0407 13:05:44.438773  915503 config.go:182] Loaded profile config "functional-893516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 13:05:44.438880  915503 config.go:182] Loaded profile config "functional-893516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 13:05:44.439256  915503 cli_runner.go:164] Run: docker container inspect functional-893516 --format={{.State.Status}}
I0407 13:05:44.458295  915503 ssh_runner.go:195] Run: systemctl --version
I0407 13:05:44.458362  915503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-893516
I0407 13:05:44.475914  915503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33299 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/functional-893516/id_rsa Username:docker}
I0407 13:05:44.562448  915503 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-893516 image ls --format json --alsologtostderr:
[{"id":"f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","repoDigests":["registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d","registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"95271321"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76","registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b"],"repoTag
s":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"70653254"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","repoDigests":["registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d","registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"],"repoTags":["registry.k8s.io/kube-apiserve
r:v1.32.2"],"size":"98055648"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56","repoDigests":["docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f
68e94a50fdf7d600a280d26","docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"95714353"},{"id":"99e2d947094f3b9a12101ca1787406a45f91253aeca87514d8fdccb01008145a","repoDigests":["docker.io/library/85072564bfe63a37a7f820656fa6659c303907cfee1b96f0f4c0bfb326e29ee5-tmp@sha256:adbd499c61649e1a66ad9f538e11229abf11095341b5f28478a86de99eb741ce"],"repoTags":[],"size":"1465612"},{"id":"1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07","repoDigests":["docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591","docker.io/library/nginx@sha256:a71e0884a7f1192ecf5decf062b67d46b54ad63f0cc1b8aa7e705f739a97c2fc"],"repoTags":["docker.io/library/nginx:alpine"],"size":"49323988"},{"id":"53a18edff8091d5faff1e42b4d885bc5f0f897873b0b8f0ace236cd5930819b0","repoDigests":["docker.io/library/nginx@sha256:124b44bfc9ccd1f3cedf4b592d4d1e8bddb78b51ec2ed5056c52d3692baebc19"
,"docker.io/library/nginx@sha256:54809b2f36d0ff38e8e5362b0239779e4b75c2f19ad70ef047ed050f01506bb4"],"repoTags":["docker.io/library/nginx:latest"],"size":"196159380"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicb
ase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-893516"],"size":"4943877"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f","repoDigests":["docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495","docker.io/kindest/kindnetd@sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97"],"repoTags":["docker.io/kindest/kindnetd:v20250214-acbabc1a"],"size":"95703604"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboar
d@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"10ba1c644f2201ddf84a92396357274a151961798ca5aa61609a29de64fdab1e","repoDigests":["localhost/minikube-local-cache-test@sha256:7618a4d7e513268feacdb5fa8cb5f6a132e840c87e6938fa95a913930288ecf8"],"repoTags":["localhost/minikube-local-cache-test:functional-893516"],"size":"3330"},{"id":"63a8de672470986f0744884287e1584f8eba286fed47e09f342b8eee688b4f4b","repoDigests":["localhost/my-image@sha256:d90482818575db99e8d8d98a84491dd255ddd9748fd534d2091335f0f1688830"],"repoTags":["localhost/my-image:functional-893516"],"size":"1468194"},{"id":"a9e7e
6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5","registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"90793286"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-893516 image ls --format json --alsologtostderr:
I0407 13:05:44.194315  915412 out.go:345] Setting OutFile to fd 1 ...
I0407 13:05:44.194442  915412 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:05:44.194453  915412 out.go:358] Setting ErrFile to fd 2...
I0407 13:05:44.194459  915412 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:05:44.194830  915412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-866963/.minikube/bin
I0407 13:05:44.195682  915412 config.go:182] Loaded profile config "functional-893516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 13:05:44.195846  915412 config.go:182] Loaded profile config "functional-893516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 13:05:44.196472  915412 cli_runner.go:164] Run: docker container inspect functional-893516 --format={{.State.Status}}
I0407 13:05:44.219277  915412 ssh_runner.go:195] Run: systemctl --version
I0407 13:05:44.219330  915412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-893516
I0407 13:05:44.241287  915412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33299 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/functional-893516/id_rsa Username:docker}
I0407 13:05:44.334134  915412 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-893516 image ls --format yaml --alsologtostderr:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07
repoDigests:
- docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591
- docker.io/library/nginx@sha256:a71e0884a7f1192ecf5decf062b67d46b54ad63f0cc1b8aa7e705f739a97c2fc
repoTags:
- docker.io/library/nginx:alpine
size: "49323988"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-893516
size: "4943877"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: 85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d
- registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "98055648"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56
repoDigests:
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
- docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "95714353"
- id: df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f
repoDigests:
- docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495
- docker.io/kindest/kindnetd@sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97
repoTags:
- docker.io/kindest/kindnetd:v20250214-acbabc1a
size: "95703604"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 10ba1c644f2201ddf84a92396357274a151961798ca5aa61609a29de64fdab1e
repoDigests:
- localhost/minikube-local-cache-test@sha256:7618a4d7e513268feacdb5fa8cb5f6a132e840c87e6938fa95a913930288ecf8
repoTags:
- localhost/minikube-local-cache-test:functional-893516
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5
- registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "90793286"
- id: f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d
- registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "95271321"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 53a18edff8091d5faff1e42b4d885bc5f0f897873b0b8f0ace236cd5930819b0
repoDigests:
- docker.io/library/nginx@sha256:124b44bfc9ccd1f3cedf4b592d4d1e8bddb78b51ec2ed5056c52d3692baebc19
- docker.io/library/nginx@sha256:54809b2f36d0ff38e8e5362b0239779e4b75c2f19ad70ef047ed050f01506bb4
repoTags:
- docker.io/library/nginx:latest
size: "196159380"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76
- registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "70653254"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-893516 image ls --format yaml --alsologtostderr:
I0407 13:05:40.645364  914959 out.go:345] Setting OutFile to fd 1 ...
I0407 13:05:40.645667  914959 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:05:40.645773  914959 out.go:358] Setting ErrFile to fd 2...
I0407 13:05:40.645812  914959 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:05:40.646457  914959 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-866963/.minikube/bin
I0407 13:05:40.647170  914959 config.go:182] Loaded profile config "functional-893516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 13:05:40.647299  914959 config.go:182] Loaded profile config "functional-893516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 13:05:40.647699  914959 cli_runner.go:164] Run: docker container inspect functional-893516 --format={{.State.Status}}
I0407 13:05:40.666526  914959 ssh_runner.go:195] Run: systemctl --version
I0407 13:05:40.666582  914959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-893516
I0407 13:05:40.684999  914959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33299 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/functional-893516/id_rsa Username:docker}
I0407 13:05:40.774398  914959 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
TestFunctional/parallel/ImageCommands/ImageBuild (3.74s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-893516 ssh pgrep buildkitd: exit status 1 (247.763766ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 image build -t localhost/my-image:functional-893516 testdata/build --alsologtostderr
2025/04/07 13:05:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-893516 image build -t localhost/my-image:functional-893516 testdata/build --alsologtostderr: (3.259558873s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-893516 image build -t localhost/my-image:functional-893516 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 99e2d947094
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-893516
--> 63a8de67247
Successfully tagged localhost/my-image:functional-893516
63a8de672470986f0744884287e1584f8eba286fed47e09f342b8eee688b4f4b
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-893516 image build -t localhost/my-image:functional-893516 testdata/build --alsologtostderr:
I0407 13:05:41.114247  915099 out.go:345] Setting OutFile to fd 1 ...
I0407 13:05:41.114672  915099 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:05:41.114685  915099 out.go:358] Setting ErrFile to fd 2...
I0407 13:05:41.114691  915099 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:05:41.114909  915099 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-866963/.minikube/bin
I0407 13:05:41.115584  915099 config.go:182] Loaded profile config "functional-893516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 13:05:41.116197  915099 config.go:182] Loaded profile config "functional-893516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 13:05:41.116632  915099 cli_runner.go:164] Run: docker container inspect functional-893516 --format={{.State.Status}}
I0407 13:05:41.135434  915099 ssh_runner.go:195] Run: systemctl --version
I0407 13:05:41.135495  915099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-893516
I0407 13:05:41.154971  915099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33299 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/functional-893516/id_rsa Username:docker}
I0407 13:05:41.246463  915099 build_images.go:161] Building image from path: /tmp/build.2358465861.tar
I0407 13:05:41.246583  915099 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0407 13:05:41.257125  915099 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2358465861.tar
I0407 13:05:41.260820  915099 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2358465861.tar: stat -c "%s %y" /var/lib/minikube/build/build.2358465861.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2358465861.tar': No such file or directory
I0407 13:05:41.260861  915099 ssh_runner.go:362] scp /tmp/build.2358465861.tar --> /var/lib/minikube/build/build.2358465861.tar (3072 bytes)
I0407 13:05:41.284733  915099 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2358465861
I0407 13:05:41.293446  915099 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2358465861 -xf /var/lib/minikube/build/build.2358465861.tar
I0407 13:05:41.302363  915099 crio.go:315] Building image: /var/lib/minikube/build/build.2358465861
I0407 13:05:41.302452  915099 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-893516 /var/lib/minikube/build/build.2358465861 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0407 13:05:44.303055  915099 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-893516 /var/lib/minikube/build/build.2358465861 --cgroup-manager=cgroupfs: (3.000569974s)
I0407 13:05:44.303153  915099 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2358465861
I0407 13:05:44.311570  915099 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2358465861.tar
I0407 13:05:44.319564  915099 build_images.go:217] Built localhost/my-image:functional-893516 from /tmp/build.2358465861.tar
I0407 13:05:44.319600  915099 build_images.go:133] succeeded building to: functional-893516
I0407 13:05:44.319605  915099 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.74s)
TestFunctional/parallel/ImageCommands/Setup (2.06s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.023118744s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-893516
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.06s)
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-893516 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-893516 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-893516 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-893516 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 908219: os: process already finished
helpers_test.go:502: unable to terminate pid 907891: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-893516 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.22s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-893516 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [8d8b63ff-93f8-4e45-b6d4-19ec93cbefe0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [8d8b63ff-93f8-4e45-b6d4-19ec93cbefe0] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.003460084s
I0407 13:05:25.377020  873820 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.22s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 image load --daemon kicbase/echo-server:functional-893516 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-893516 image load --daemon kicbase/echo-server:functional-893516 --alsologtostderr: (1.026826354s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.25s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.9s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 image load --daemon kicbase/echo-server:functional-893516 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.90s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.91s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-893516
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 image load --daemon kicbase/echo-server:functional-893516 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.91s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.89s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 image save kicbase/echo-server:functional-893516 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.89s)
TestFunctional/parallel/ImageCommands/ImageRemove (1.19s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 image rm kicbase/echo-server:functional-893516 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.19s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.1s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.10s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-893516
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 image save --daemon kicbase/echo-server:functional-893516 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-893516
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
TestFunctional/parallel/ServiceCmd/DeployApp (7.16s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-893516 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-893516 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-g7s8t" [f5b53d12-275d-4058-a7c4-4165dbc16050] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E0407 13:05:23.161823  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-fcfd88b6f-g7s8t" [f5b53d12-275d-4058-a7c4-4165dbc16050] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003664589s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.16s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-893516 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.250.55 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-893516 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "323.14044ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "51.266684ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "320.473962ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "56.706378ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

TestFunctional/parallel/MountCmd/any-port (9.89s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-893516 /tmp/TestFunctionalparallelMountCmdany-port3496877421/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1744031126685148596" to /tmp/TestFunctionalparallelMountCmdany-port3496877421/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1744031126685148596" to /tmp/TestFunctionalparallelMountCmdany-port3496877421/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1744031126685148596" to /tmp/TestFunctionalparallelMountCmdany-port3496877421/001/test-1744031126685148596
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-893516 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (365.088729ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I0407 13:05:27.050560  873820 retry.go:31] will retry after 452.109333ms: exit status 1
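The `retry.go:31` line above shows the harness retrying the failed `findmnt` check after a short delay. A minimal sketch of that retry-with-backoff pattern in plain shell (`retry` is a hypothetical helper for illustration, not minikube code):

```shell
# Hypothetical sketch of the retry-with-backoff pattern the harness logs
# above (retry.go); not minikube code. Runs the given command up to $1
# times, doubling the sleep after each failed attempt.
retry() {
  attempts=$1
  shift
  delay=1
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    sleep "$delay"
    delay=$((delay * 2))
    i=$((i + 1))
  done
  return 1
}

# Example: the command succeeds on the first attempt, so no sleep happens.
retry 3 true && echo "mount check passed"
```

In the log above, the real check is `minikube ssh "findmnt -T /mount-9p | grep 9p"`, which fails once while the 9p mount is still coming up and succeeds on the retry.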
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr  7 13:05 created-by-test
-rw-r--r-- 1 docker docker 24 Apr  7 13:05 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr  7 13:05 test-1744031126685148596
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh cat /mount-9p/test-1744031126685148596
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-893516 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [0b058526-3ac2-453c-ae9f-fa428c76607e] Pending
helpers_test.go:344: "busybox-mount" [0b058526-3ac2-453c-ae9f-fa428c76607e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [0b058526-3ac2-453c-ae9f-fa428c76607e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [0b058526-3ac2-453c-ae9f-fa428c76607e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.00344112s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-893516 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-893516 /tmp/TestFunctionalparallelMountCmdany-port3496877421/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.89s)

TestFunctional/parallel/ServiceCmd/List (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.55s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 service list -o json
functional_test.go:1511: Took "516.519074ms" to run "out/minikube-linux-amd64 -p functional-893516 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:30564
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

TestFunctional/parallel/ServiceCmd/Format (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

TestFunctional/parallel/ServiceCmd/URL (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:30564
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/MountCmd/specific-port (1.94s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-893516 /tmp/TestFunctionalparallelMountCmdspecific-port729807108/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-893516 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (340.175399ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I0407 13:05:36.915160  873820 retry.go:31] will retry after 516.745142ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-893516 /tmp/TestFunctionalparallelMountCmdspecific-port729807108/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-893516 ssh "sudo umount -f /mount-9p": exit status 1 (291.691405ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --

** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-893516 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-893516 /tmp/TestFunctionalparallelMountCmdspecific-port729807108/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.94s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.16s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-893516 /tmp/TestFunctionalparallelMountCmdVerifyCleanup175444213/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-893516 /tmp/TestFunctionalparallelMountCmdVerifyCleanup175444213/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-893516 /tmp/TestFunctionalparallelMountCmdVerifyCleanup175444213/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-893516 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-893516 /tmp/TestFunctionalparallelMountCmdVerifyCleanup175444213/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-893516 /tmp/TestFunctionalparallelMountCmdVerifyCleanup175444213/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-893516 /tmp/TestFunctionalparallelMountCmdVerifyCleanup175444213/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.16s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.5s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-893516 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.50s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-893516
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-893516
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-893516
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (99.84s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-970956 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0407 13:06:45.083368  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-970956 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m39.146417339s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (99.84s)

TestMultiControlPlane/serial/DeployApp (6.96s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-970956 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-970956 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-970956 -- rollout status deployment/busybox: (4.966009318s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-970956 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-970956 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-970956 -- exec busybox-58667487b6-c65l4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-970956 -- exec busybox-58667487b6-k6hfd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-970956 -- exec busybox-58667487b6-kqhtv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-970956 -- exec busybox-58667487b6-c65l4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-970956 -- exec busybox-58667487b6-k6hfd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-970956 -- exec busybox-58667487b6-kqhtv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-970956 -- exec busybox-58667487b6-c65l4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-970956 -- exec busybox-58667487b6-k6hfd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-970956 -- exec busybox-58667487b6-kqhtv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.96s)

TestMultiControlPlane/serial/PingHostFromPods (1.1s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-970956 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-970956 -- exec busybox-58667487b6-c65l4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-970956 -- exec busybox-58667487b6-c65l4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-970956 -- exec busybox-58667487b6-k6hfd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-970956 -- exec busybox-58667487b6-k6hfd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-970956 -- exec busybox-58667487b6-kqhtv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-970956 -- exec busybox-58667487b6-kqhtv -- sh -c "ping -c 1 192.168.49.1"
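The six commands above repeat one check per busybox pod: resolve `host.minikube.internal` inside the pod, then ping the Docker bridge gateway (192.168.49.1). A sketch of that loop, using the pod names from this run; it only prints the commands rather than executing them, since they need the live ha-970956 cluster:

```shell
# Per-pod host-reachability check from the log above, expressed as a loop.
# Pod names are taken from this run; commands are echoed, not executed.
pods="busybox-58667487b6-c65l4 busybox-58667487b6-k6hfd busybox-58667487b6-kqhtv"
for pod in $pods; do
  # Resolve the host's address as seen from inside the pod.
  echo "out/minikube-linux-amd64 kubectl -p ha-970956 -- exec $pod -- sh -c \"nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3\""
  # Ping the bridge gateway directly.
  echo "out/minikube-linux-amd64 kubectl -p ha-970956 -- exec $pod -- sh -c \"ping -c 1 192.168.49.1\""
done
```

Three pods, two checks each, matching the six `ha_test.go` lines above.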
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.10s)

TestMultiControlPlane/serial/AddWorkerNode (37.39s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-970956 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-970956 -v=7 --alsologtostderr: (36.541563916s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (37.39s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-970956 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

TestMultiControlPlane/serial/CopyFile (16.18s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 cp testdata/cp-test.txt ha-970956:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 cp ha-970956:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile333189404/001/cp-test_ha-970956.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 cp ha-970956:/home/docker/cp-test.txt ha-970956-m02:/home/docker/cp-test_ha-970956_ha-970956-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956-m02 "sudo cat /home/docker/cp-test_ha-970956_ha-970956-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 cp ha-970956:/home/docker/cp-test.txt ha-970956-m03:/home/docker/cp-test_ha-970956_ha-970956-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956-m03 "sudo cat /home/docker/cp-test_ha-970956_ha-970956-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 cp ha-970956:/home/docker/cp-test.txt ha-970956-m04:/home/docker/cp-test_ha-970956_ha-970956-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956-m04 "sudo cat /home/docker/cp-test_ha-970956_ha-970956-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 cp testdata/cp-test.txt ha-970956-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 cp ha-970956-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile333189404/001/cp-test_ha-970956-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 cp ha-970956-m02:/home/docker/cp-test.txt ha-970956:/home/docker/cp-test_ha-970956-m02_ha-970956.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956 "sudo cat /home/docker/cp-test_ha-970956-m02_ha-970956.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 cp ha-970956-m02:/home/docker/cp-test.txt ha-970956-m03:/home/docker/cp-test_ha-970956-m02_ha-970956-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956-m03 "sudo cat /home/docker/cp-test_ha-970956-m02_ha-970956-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 cp ha-970956-m02:/home/docker/cp-test.txt ha-970956-m04:/home/docker/cp-test_ha-970956-m02_ha-970956-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956-m04 "sudo cat /home/docker/cp-test_ha-970956-m02_ha-970956-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 cp testdata/cp-test.txt ha-970956-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 cp ha-970956-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile333189404/001/cp-test_ha-970956-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 cp ha-970956-m03:/home/docker/cp-test.txt ha-970956:/home/docker/cp-test_ha-970956-m03_ha-970956.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956 "sudo cat /home/docker/cp-test_ha-970956-m03_ha-970956.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 cp ha-970956-m03:/home/docker/cp-test.txt ha-970956-m02:/home/docker/cp-test_ha-970956-m03_ha-970956-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956-m02 "sudo cat /home/docker/cp-test_ha-970956-m03_ha-970956-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 cp ha-970956-m03:/home/docker/cp-test.txt ha-970956-m04:/home/docker/cp-test_ha-970956-m03_ha-970956-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956-m04 "sudo cat /home/docker/cp-test_ha-970956-m03_ha-970956-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 cp testdata/cp-test.txt ha-970956-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 cp ha-970956-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile333189404/001/cp-test_ha-970956-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 cp ha-970956-m04:/home/docker/cp-test.txt ha-970956:/home/docker/cp-test_ha-970956-m04_ha-970956.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956 "sudo cat /home/docker/cp-test_ha-970956-m04_ha-970956.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 cp ha-970956-m04:/home/docker/cp-test.txt ha-970956-m02:/home/docker/cp-test_ha-970956-m04_ha-970956-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956-m02 "sudo cat /home/docker/cp-test_ha-970956-m04_ha-970956-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 cp ha-970956-m04:/home/docker/cp-test.txt ha-970956-m03:/home/docker/cp-test_ha-970956-m04_ha-970956-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 ssh -n ha-970956-m03 "sudo cat /home/docker/cp-test_ha-970956-m04_ha-970956-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.18s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 node stop m02 -v=7 --alsologtostderr
E0407 13:09:01.225651  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-970956 node stop m02 -v=7 --alsologtostderr: (11.866301721s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-970956 status -v=7 --alsologtostderr: exit status 7 (665.613399ms)

                                                
                                                
-- stdout --
	ha-970956
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-970956-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-970956-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-970956-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 13:09:04.948403  937004 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:09:04.948510  937004 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:09:04.948515  937004 out.go:358] Setting ErrFile to fd 2...
	I0407 13:09:04.948519  937004 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:09:04.948728  937004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-866963/.minikube/bin
	I0407 13:09:04.948898  937004 out.go:352] Setting JSON to false
	I0407 13:09:04.948933  937004 mustload.go:65] Loading cluster: ha-970956
	I0407 13:09:04.949053  937004 notify.go:220] Checking for updates...
	I0407 13:09:04.949398  937004 config.go:182] Loaded profile config "ha-970956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:09:04.949427  937004 status.go:174] checking status of ha-970956 ...
	I0407 13:09:04.949944  937004 cli_runner.go:164] Run: docker container inspect ha-970956 --format={{.State.Status}}
	I0407 13:09:04.968243  937004 status.go:371] ha-970956 host status = "Running" (err=<nil>)
	I0407 13:09:04.968273  937004 host.go:66] Checking if "ha-970956" exists ...
	I0407 13:09:04.968531  937004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-970956
	I0407 13:09:04.987924  937004 host.go:66] Checking if "ha-970956" exists ...
	I0407 13:09:04.988312  937004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:09:04.988370  937004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-970956
	I0407 13:09:05.006941  937004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33304 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/ha-970956/id_rsa Username:docker}
	I0407 13:09:05.094811  937004 ssh_runner.go:195] Run: systemctl --version
	I0407 13:09:05.099121  937004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:09:05.109961  937004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 13:09:05.159925  937004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:74 SystemTime:2025-04-07 13:09:05.150089797 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0407 13:09:05.160489  937004 kubeconfig.go:125] found "ha-970956" server: "https://192.168.49.254:8443"
	I0407 13:09:05.160525  937004 api_server.go:166] Checking apiserver status ...
	I0407 13:09:05.160565  937004 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:09:05.171491  937004 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup
	I0407 13:09:05.181035  937004 api_server.go:182] apiserver freezer: "9:freezer:/docker/f12f5781384b2d42a3c4ca341a9d441035f36b116abc3bc9fb3ecc5f22f8fee0/crio/crio-4ea7b0adb3a359d538c3121b56229543b383444fbaec6dec66a982e8cdfd41d5"
	I0407 13:09:05.181102  937004 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f12f5781384b2d42a3c4ca341a9d441035f36b116abc3bc9fb3ecc5f22f8fee0/crio/crio-4ea7b0adb3a359d538c3121b56229543b383444fbaec6dec66a982e8cdfd41d5/freezer.state
	I0407 13:09:05.189677  937004 api_server.go:204] freezer state: "THAWED"
	I0407 13:09:05.189709  937004 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0407 13:09:05.193615  937004 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0407 13:09:05.193648  937004 status.go:463] ha-970956 apiserver status = Running (err=<nil>)
	I0407 13:09:05.193663  937004 status.go:176] ha-970956 status: &{Name:ha-970956 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:09:05.193683  937004 status.go:174] checking status of ha-970956-m02 ...
	I0407 13:09:05.194014  937004 cli_runner.go:164] Run: docker container inspect ha-970956-m02 --format={{.State.Status}}
	I0407 13:09:05.212193  937004 status.go:371] ha-970956-m02 host status = "Stopped" (err=<nil>)
	I0407 13:09:05.212218  937004 status.go:384] host is not running, skipping remaining checks
	I0407 13:09:05.212224  937004 status.go:176] ha-970956-m02 status: &{Name:ha-970956-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:09:05.212247  937004 status.go:174] checking status of ha-970956-m03 ...
	I0407 13:09:05.212544  937004 cli_runner.go:164] Run: docker container inspect ha-970956-m03 --format={{.State.Status}}
	I0407 13:09:05.230485  937004 status.go:371] ha-970956-m03 host status = "Running" (err=<nil>)
	I0407 13:09:05.230514  937004 host.go:66] Checking if "ha-970956-m03" exists ...
	I0407 13:09:05.230790  937004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-970956-m03
	I0407 13:09:05.248590  937004 host.go:66] Checking if "ha-970956-m03" exists ...
	I0407 13:09:05.248852  937004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:09:05.248894  937004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-970956-m03
	I0407 13:09:05.266447  937004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33314 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/ha-970956-m03/id_rsa Username:docker}
	I0407 13:09:05.354673  937004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:09:05.366058  937004 kubeconfig.go:125] found "ha-970956" server: "https://192.168.49.254:8443"
	I0407 13:09:05.366086  937004 api_server.go:166] Checking apiserver status ...
	I0407 13:09:05.366118  937004 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:09:05.376918  937004 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1411/cgroup
	I0407 13:09:05.386718  937004 api_server.go:182] apiserver freezer: "9:freezer:/docker/d547864557997e6be4dd6e0003332057805381b180756c48f7435a122eef4b3d/crio/crio-954839e7c88efd42e912710ee53d8a17ae60053c5806ff62aefb8cb192ff00ea"
	I0407 13:09:05.386780  937004 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d547864557997e6be4dd6e0003332057805381b180756c48f7435a122eef4b3d/crio/crio-954839e7c88efd42e912710ee53d8a17ae60053c5806ff62aefb8cb192ff00ea/freezer.state
	I0407 13:09:05.395174  937004 api_server.go:204] freezer state: "THAWED"
	I0407 13:09:05.395219  937004 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0407 13:09:05.399471  937004 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0407 13:09:05.399498  937004 status.go:463] ha-970956-m03 apiserver status = Running (err=<nil>)
	I0407 13:09:05.399508  937004 status.go:176] ha-970956-m03 status: &{Name:ha-970956-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:09:05.399523  937004 status.go:174] checking status of ha-970956-m04 ...
	I0407 13:09:05.399783  937004 cli_runner.go:164] Run: docker container inspect ha-970956-m04 --format={{.State.Status}}
	I0407 13:09:05.417822  937004 status.go:371] ha-970956-m04 host status = "Running" (err=<nil>)
	I0407 13:09:05.417847  937004 host.go:66] Checking if "ha-970956-m04" exists ...
	I0407 13:09:05.418095  937004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-970956-m04
	I0407 13:09:05.437789  937004 host.go:66] Checking if "ha-970956-m04" exists ...
	I0407 13:09:05.438125  937004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:09:05.438171  937004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-970956-m04
	I0407 13:09:05.456302  937004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33319 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/ha-970956-m04/id_rsa Username:docker}
	I0407 13:09:05.546771  937004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:09:05.559869  937004 status.go:176] ha-970956-m04 status: &{Name:ha-970956-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.53s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (24.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 node start m02 -v=7 --alsologtostderr
E0407 13:09:28.925035  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-970956 node start m02 -v=7 --alsologtostderr: (23.410138012s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-970956 status -v=7 --alsologtostderr: (1.025016835s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (24.52s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.101508445s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.10s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (156.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-970956 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-970956 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-970956 -v=7 --alsologtostderr: (36.873034503s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-970956 --wait=true -v=7 --alsologtostderr
E0407 13:10:14.093486  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/functional-893516/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:10:14.099950  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/functional-893516/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:10:14.111387  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/functional-893516/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:10:14.132872  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/functional-893516/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:10:14.174445  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/functional-893516/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:10:14.256503  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/functional-893516/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:10:14.417814  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/functional-893516/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:10:14.739679  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/functional-893516/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:10:15.381563  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/functional-893516/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:10:16.662988  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/functional-893516/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:10:19.225496  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/functional-893516/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:10:24.347495  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/functional-893516/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:10:34.589394  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/functional-893516/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:10:55.071085  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/functional-893516/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:11:36.033494  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/functional-893516/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-970956 --wait=true -v=7 --alsologtostderr: (1m59.587979604s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-970956
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (156.57s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-970956 node delete m03 -v=7 --alsologtostderr: (10.742227146s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.52s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-970956 stop -v=7 --alsologtostderr: (35.618741279s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-970956 status -v=7 --alsologtostderr: exit status 7 (109.132765ms)

                                                
                                                
-- stdout --
	ha-970956
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-970956-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-970956-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 13:12:56.306129  954214 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:12:56.306282  954214 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:12:56.306294  954214 out.go:358] Setting ErrFile to fd 2...
	I0407 13:12:56.306299  954214 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:12:56.306507  954214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-866963/.minikube/bin
	I0407 13:12:56.306737  954214 out.go:352] Setting JSON to false
	I0407 13:12:56.306770  954214 mustload.go:65] Loading cluster: ha-970956
	I0407 13:12:56.306855  954214 notify.go:220] Checking for updates...
	I0407 13:12:56.307651  954214 config.go:182] Loaded profile config "ha-970956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:12:56.307708  954214 status.go:174] checking status of ha-970956 ...
	I0407 13:12:56.309026  954214 cli_runner.go:164] Run: docker container inspect ha-970956 --format={{.State.Status}}
	I0407 13:12:56.328695  954214 status.go:371] ha-970956 host status = "Stopped" (err=<nil>)
	I0407 13:12:56.328755  954214 status.go:384] host is not running, skipping remaining checks
	I0407 13:12:56.328767  954214 status.go:176] ha-970956 status: &{Name:ha-970956 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:12:56.328795  954214 status.go:174] checking status of ha-970956-m02 ...
	I0407 13:12:56.329095  954214 cli_runner.go:164] Run: docker container inspect ha-970956-m02 --format={{.State.Status}}
	I0407 13:12:56.346673  954214 status.go:371] ha-970956-m02 host status = "Stopped" (err=<nil>)
	I0407 13:12:56.346700  954214 status.go:384] host is not running, skipping remaining checks
	I0407 13:12:56.346709  954214 status.go:176] ha-970956-m02 status: &{Name:ha-970956-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:12:56.346737  954214 status.go:174] checking status of ha-970956-m04 ...
	I0407 13:12:56.346996  954214 cli_runner.go:164] Run: docker container inspect ha-970956-m04 --format={{.State.Status}}
	I0407 13:12:56.364609  954214 status.go:371] ha-970956-m04 host status = "Stopped" (err=<nil>)
	I0407 13:12:56.364641  954214 status.go:384] host is not running, skipping remaining checks
	I0407 13:12:56.364651  954214 status.go:176] ha-970956-m04 status: &{Name:ha-970956-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.73s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (95.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-970956 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0407 13:12:57.955952  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/functional-893516/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:14:01.220967  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-970956 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m34.746048432s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (95.54s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (40.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-970956 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-970956 --control-plane -v=7 --alsologtostderr: (39.944774626s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-970956 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (40.80s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0407 13:15:14.093386  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/functional-893516/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

                                                
                                    
TestJSONOutput/start/Command (42.31s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-250073 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0407 13:15:41.798322  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/functional-893516/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-250073 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (42.308661735s)
--- PASS: TestJSONOutput/start/Command (42.31s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.7s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-250073 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-250073 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.8s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-250073 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-250073 --output=json --user=testUser: (5.804420832s)
--- PASS: TestJSONOutput/stop/Command (5.80s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-443100 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-443100 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (74.641009ms)

-- stdout --
	{"specversion":"1.0","id":"f8674540-c416-4765-a60c-77c1cbab40bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-443100] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5cae8ff3-3f94-459d-8d88-22503b145821","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20598"}}
	{"specversion":"1.0","id":"07ba5f81-4b3f-47b2-9262-e8fe86a1c61d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cde628f5-d0b6-4760-b72b-b1b7cc4d6688","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20598-866963/kubeconfig"}}
	{"specversion":"1.0","id":"ce90b13a-1579-4426-8ba5-9bad29bfe391","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-866963/.minikube"}}
	{"specversion":"1.0","id":"38d669d6-67ea-45d3-ab9a-a8c100868c35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"bd65d0a7-5a57-4c7a-95d5-d9cb5cd2c785","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2fed2cfa-c128-4ee1-8d56-29ac56f4928d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-443100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-443100
--- PASS: TestErrorJSONOutput (0.22s)
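Each line of the `-- stdout --` block above is a single CloudEvents 1.0 JSON object. As a minimal sketch (not minikube's own tooling) of how such a stream can be consumed, the snippet below parses the error event copied verbatim from the run above:

```python
import json

def parse_minikube_events(lines):
    """Parse CloudEvents emitted by `minikube ... --output=json`, one JSON object per line."""
    events = []
    for line in lines:
        line = line.strip()
        if not line.startswith("{"):
            continue  # skip any non-JSON log noise
        evt = json.loads(line)
        events.append((evt["type"], evt.get("data", {})))
    return events

# The DRV_UNSUPPORTED_OS error event from the run above, verbatim:
raw = '''{"specversion":"1.0","id":"2fed2cfa-c128-4ee1-8d56-29ac56f4928d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}'''
events = parse_minikube_events([raw])
etype, data = events[0]
assert etype == "io.k8s.sigs.minikube.error"
assert data["exitcode"] == "56"  # matches the `exit status 56` reported by the test
```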

TestKicCustomNetwork/create_custom_network (36.33s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-496644 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-496644 --network=: (34.137885296s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-496644" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-496644
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-496644: (2.168962745s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.33s)

TestKicCustomNetwork/use_default_bridge_network (26.34s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-307556 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-307556 --network=bridge: (24.411024484s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-307556" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-307556
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-307556: (1.907831068s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.34s)

TestKicExistingNetwork (23.83s)

=== RUN   TestKicExistingNetwork
I0407 13:17:19.332177  873820 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0407 13:17:19.350745  873820 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0407 13:17:19.350833  873820 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0407 13:17:19.350852  873820 cli_runner.go:164] Run: docker network inspect existing-network
W0407 13:17:19.368966  873820 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0407 13:17:19.368999  873820 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0407 13:17:19.369014  873820 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0407 13:17:19.369142  873820 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0407 13:17:19.387510  873820 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b88d78535226 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:b6:17:55:4c:3a} reservation:<nil>}
I0407 13:17:19.388025  873820 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0005e32a0}
I0407 13:17:19.388064  873820 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0407 13:17:19.388125  873820 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0407 13:17:19.440935  873820 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-961932 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-961932 --network=existing-network: (21.792366372s)
helpers_test.go:175: Cleaning up "existing-network-961932" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-961932
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-961932: (1.888580068s)
I0407 13:17:43.140731  873820 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.83s)
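The subnet probing visible in the log above (192.168.49.0/24 skipped as taken, 192.168.58.0/24 chosen as the free private subnet) can be sketched as follows. This is an illustration of the selection logic, not minikube's actual implementation; the step of 9 between third octets is an assumption inferred from the two subnets in the log.

```python
import ipaddress

def pick_free_subnet(taken, start="192.168.49.0/24", step=9, tries=20):
    """Walk candidate /24 subnets (third octet advancing by `step`) and
    return the first one that overlaps no already-taken subnet."""
    taken = [ipaddress.ip_network(t) for t in taken]
    net = ipaddress.ip_network(start)
    for _ in range(tries):
        if not any(net.overlaps(t) for t in taken):
            return str(net)
        # advance the third octet by `step` (step * 256 addresses)
        net = ipaddress.ip_network((int(net.network_address) + step * 256, net.prefixlen))
    return None

# Reproduces the choice seen in the log: 49.0/24 is taken, 58.0/24 is free.
assert pick_free_subnet(["192.168.49.0/24"]) == "192.168.58.0/24"
```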

TestKicCustomSubnet (24.97s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-657069 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-657069 --subnet=192.168.60.0/24: (22.828309414s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-657069 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-657069" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-657069
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-657069: (2.12418029s)
--- PASS: TestKicCustomSubnet (24.97s)

TestKicStaticIP (25.46s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-581797 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-581797 --static-ip=192.168.200.200: (23.246983001s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-581797 ip
helpers_test.go:175: Cleaning up "static-ip-581797" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-581797
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-581797: (2.079746662s)
--- PASS: TestKicStaticIP (25.46s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (51s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-091375 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-091375 --driver=docker  --container-runtime=crio: (21.059146934s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-103982 --driver=docker  --container-runtime=crio
E0407 13:19:01.226247  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-103982 --driver=docker  --container-runtime=crio: (24.607885105s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-091375
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-103982
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-103982" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-103982
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-103982: (1.882844216s)
helpers_test.go:175: Cleaning up "first-091375" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-091375
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-091375: (2.259822533s)
--- PASS: TestMinikubeProfile (51.00s)

TestMountStart/serial/StartWithMountFirst (6.04s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-323457 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-323457 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.042848767s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.04s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-323457 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (6.98s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-340651 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-340651 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.980851323s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.98s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-340651 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-323457 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-323457 --alsologtostderr -v=5: (1.62314672s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-340651 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.19s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-340651
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-340651: (1.185388319s)
--- PASS: TestMountStart/serial/Stop (1.19s)

TestMountStart/serial/RestartStopped (7.82s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-340651
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-340651: (6.816179375s)
--- PASS: TestMountStart/serial/RestartStopped (7.82s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-340651 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (77.3s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-094855 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0407 13:20:14.093932  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/functional-893516/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:20:24.286449  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-094855 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m16.8329103s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (77.30s)

TestMultiNode/serial/DeployApp2Nodes (5.57s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094855 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094855 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-094855 -- rollout status deployment/busybox: (4.104491839s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094855 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094855 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094855 -- exec busybox-58667487b6-gn4q9 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094855 -- exec busybox-58667487b6-t8j28 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094855 -- exec busybox-58667487b6-gn4q9 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094855 -- exec busybox-58667487b6-t8j28 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094855 -- exec busybox-58667487b6-gn4q9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094855 -- exec busybox-58667487b6-t8j28 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.57s)

TestMultiNode/serial/PingHostFrom2Pods (0.79s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094855 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094855 -- exec busybox-58667487b6-gn4q9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094855 -- exec busybox-58667487b6-gn4q9 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094855 -- exec busybox-58667487b6-t8j28 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094855 -- exec busybox-58667487b6-t8j28 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)

TestMultiNode/serial/AddNode (33.22s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-094855 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-094855 -v 3 --alsologtostderr: (32.596968045s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (33.22s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-094855 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.64s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.64s)

TestMultiNode/serial/CopyFile (9.5s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 cp testdata/cp-test.txt multinode-094855:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 ssh -n multinode-094855 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 cp multinode-094855:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1040622541/001/cp-test_multinode-094855.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 ssh -n multinode-094855 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 cp multinode-094855:/home/docker/cp-test.txt multinode-094855-m02:/home/docker/cp-test_multinode-094855_multinode-094855-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 ssh -n multinode-094855 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 ssh -n multinode-094855-m02 "sudo cat /home/docker/cp-test_multinode-094855_multinode-094855-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 cp multinode-094855:/home/docker/cp-test.txt multinode-094855-m03:/home/docker/cp-test_multinode-094855_multinode-094855-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 ssh -n multinode-094855 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 ssh -n multinode-094855-m03 "sudo cat /home/docker/cp-test_multinode-094855_multinode-094855-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 cp testdata/cp-test.txt multinode-094855-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 ssh -n multinode-094855-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 cp multinode-094855-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1040622541/001/cp-test_multinode-094855-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 ssh -n multinode-094855-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 cp multinode-094855-m02:/home/docker/cp-test.txt multinode-094855:/home/docker/cp-test_multinode-094855-m02_multinode-094855.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 ssh -n multinode-094855-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 ssh -n multinode-094855 "sudo cat /home/docker/cp-test_multinode-094855-m02_multinode-094855.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 cp multinode-094855-m02:/home/docker/cp-test.txt multinode-094855-m03:/home/docker/cp-test_multinode-094855-m02_multinode-094855-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 ssh -n multinode-094855-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 ssh -n multinode-094855-m03 "sudo cat /home/docker/cp-test_multinode-094855-m02_multinode-094855-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 cp testdata/cp-test.txt multinode-094855-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 ssh -n multinode-094855-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 cp multinode-094855-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1040622541/001/cp-test_multinode-094855-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 ssh -n multinode-094855-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 cp multinode-094855-m03:/home/docker/cp-test.txt multinode-094855:/home/docker/cp-test_multinode-094855-m03_multinode-094855.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 ssh -n multinode-094855-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 ssh -n multinode-094855 "sudo cat /home/docker/cp-test_multinode-094855-m03_multinode-094855.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 cp multinode-094855-m03:/home/docker/cp-test.txt multinode-094855-m02:/home/docker/cp-test_multinode-094855-m03_multinode-094855-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 ssh -n multinode-094855-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 ssh -n multinode-094855-m02 "sudo cat /home/docker/cp-test_multinode-094855-m03_multinode-094855-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.50s)

TestMultiNode/serial/StopNode (2.15s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-094855 node stop m03: (1.186465457s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-094855 status: exit status 7 (471.527729ms)
-- stdout --
	multinode-094855
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-094855-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-094855-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-094855 status --alsologtostderr: exit status 7 (493.834017ms)
-- stdout --
	multinode-094855
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-094855-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-094855-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0407 13:21:59.922498 1020537 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:21:59.922780 1020537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:21:59.922790 1020537 out.go:358] Setting ErrFile to fd 2...
	I0407 13:21:59.922794 1020537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:21:59.923026 1020537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-866963/.minikube/bin
	I0407 13:21:59.923193 1020537 out.go:352] Setting JSON to false
	I0407 13:21:59.923226 1020537 mustload.go:65] Loading cluster: multinode-094855
	I0407 13:21:59.923272 1020537 notify.go:220] Checking for updates...
	I0407 13:21:59.923708 1020537 config.go:182] Loaded profile config "multinode-094855": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:21:59.923744 1020537 status.go:174] checking status of multinode-094855 ...
	I0407 13:21:59.924236 1020537 cli_runner.go:164] Run: docker container inspect multinode-094855 --format={{.State.Status}}
	I0407 13:21:59.946915 1020537 status.go:371] multinode-094855 host status = "Running" (err=<nil>)
	I0407 13:21:59.946955 1020537 host.go:66] Checking if "multinode-094855" exists ...
	I0407 13:21:59.947215 1020537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-094855
	I0407 13:21:59.965457 1020537 host.go:66] Checking if "multinode-094855" exists ...
	I0407 13:21:59.965812 1020537 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:21:59.965877 1020537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-094855
	I0407 13:21:59.984827 1020537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/multinode-094855/id_rsa Username:docker}
	I0407 13:22:00.075152 1020537 ssh_runner.go:195] Run: systemctl --version
	I0407 13:22:00.080252 1020537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:22:00.092966 1020537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 13:22:00.145904 1020537 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:64 SystemTime:2025-04-07 13:22:00.135894594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0407 13:22:00.146504 1020537 kubeconfig.go:125] found "multinode-094855" server: "https://192.168.67.2:8443"
	I0407 13:22:00.146538 1020537 api_server.go:166] Checking apiserver status ...
	I0407 13:22:00.146577 1020537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:22:00.158278 1020537 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1523/cgroup
	I0407 13:22:00.169030 1020537 api_server.go:182] apiserver freezer: "9:freezer:/docker/cc1358581af2de422aebcc7f4f4133d28431cd0f0b3257e98ee22e08abbf0d4f/crio/crio-bbc3771712c085cca7bcdf9694a3c07b64cac0e005a30b141545b16aa8780463"
	I0407 13:22:00.169115 1020537 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cc1358581af2de422aebcc7f4f4133d28431cd0f0b3257e98ee22e08abbf0d4f/crio/crio-bbc3771712c085cca7bcdf9694a3c07b64cac0e005a30b141545b16aa8780463/freezer.state
	I0407 13:22:00.177618 1020537 api_server.go:204] freezer state: "THAWED"
	I0407 13:22:00.177655 1020537 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0407 13:22:00.181555 1020537 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0407 13:22:00.181586 1020537 status.go:463] multinode-094855 apiserver status = Running (err=<nil>)
	I0407 13:22:00.181600 1020537 status.go:176] multinode-094855 status: &{Name:multinode-094855 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:22:00.181621 1020537 status.go:174] checking status of multinode-094855-m02 ...
	I0407 13:22:00.181905 1020537 cli_runner.go:164] Run: docker container inspect multinode-094855-m02 --format={{.State.Status}}
	I0407 13:22:00.200925 1020537 status.go:371] multinode-094855-m02 host status = "Running" (err=<nil>)
	I0407 13:22:00.200955 1020537 host.go:66] Checking if "multinode-094855-m02" exists ...
	I0407 13:22:00.201209 1020537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-094855-m02
	I0407 13:22:00.219556 1020537 host.go:66] Checking if "multinode-094855-m02" exists ...
	I0407 13:22:00.219874 1020537 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:22:00.219926 1020537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-094855-m02
	I0407 13:22:00.240274 1020537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/20598-866963/.minikube/machines/multinode-094855-m02/id_rsa Username:docker}
	I0407 13:22:00.330763 1020537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:22:00.342459 1020537 status.go:176] multinode-094855-m02 status: &{Name:multinode-094855-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:22:00.342509 1020537 status.go:174] checking status of multinode-094855-m03 ...
	I0407 13:22:00.342796 1020537 cli_runner.go:164] Run: docker container inspect multinode-094855-m03 --format={{.State.Status}}
	I0407 13:22:00.362592 1020537 status.go:371] multinode-094855-m03 host status = "Stopped" (err=<nil>)
	I0407 13:22:00.362622 1020537 status.go:384] host is not running, skipping remaining checks
	I0407 13:22:00.362629 1020537 status.go:176] multinode-094855-m03 status: &{Name:multinode-094855-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.15s)

TestMultiNode/serial/StartAfterStop (9.11s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-094855 node start m03 -v=7 --alsologtostderr: (8.428773114s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.11s)

TestMultiNode/serial/RestartKeepsNodes (84.09s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-094855
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-094855
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-094855: (24.816710469s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-094855 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-094855 --wait=true -v=8 --alsologtostderr: (59.168302501s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-094855
--- PASS: TestMultiNode/serial/RestartKeepsNodes (84.09s)

TestMultiNode/serial/DeleteNode (5s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-094855 node delete m03: (4.422656609s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.00s)

TestMultiNode/serial/StopMultiNode (23.83s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 stop
E0407 13:24:01.220737  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-094855 stop: (23.643744205s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-094855 status: exit status 7 (89.865288ms)
-- stdout --
	multinode-094855
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-094855-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-094855 status --alsologtostderr: exit status 7 (90.95894ms)
-- stdout --
	multinode-094855
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-094855-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0407 13:24:02.356100 1029940 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:24:02.356637 1029940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:24:02.356732 1029940 out.go:358] Setting ErrFile to fd 2...
	I0407 13:24:02.356756 1029940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:24:02.357217 1029940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-866963/.minikube/bin
	I0407 13:24:02.357546 1029940 out.go:352] Setting JSON to false
	I0407 13:24:02.357668 1029940 mustload.go:65] Loading cluster: multinode-094855
	I0407 13:24:02.357793 1029940 notify.go:220] Checking for updates...
	I0407 13:24:02.358408 1029940 config.go:182] Loaded profile config "multinode-094855": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:24:02.358437 1029940 status.go:174] checking status of multinode-094855 ...
	I0407 13:24:02.358862 1029940 cli_runner.go:164] Run: docker container inspect multinode-094855 --format={{.State.Status}}
	I0407 13:24:02.377320 1029940 status.go:371] multinode-094855 host status = "Stopped" (err=<nil>)
	I0407 13:24:02.377353 1029940 status.go:384] host is not running, skipping remaining checks
	I0407 13:24:02.377362 1029940 status.go:176] multinode-094855 status: &{Name:multinode-094855 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:24:02.377407 1029940 status.go:174] checking status of multinode-094855-m02 ...
	I0407 13:24:02.377704 1029940 cli_runner.go:164] Run: docker container inspect multinode-094855-m02 --format={{.State.Status}}
	I0407 13:24:02.397394 1029940 status.go:371] multinode-094855-m02 host status = "Stopped" (err=<nil>)
	I0407 13:24:02.397439 1029940 status.go:384] host is not running, skipping remaining checks
	I0407 13:24:02.397447 1029940 status.go:176] multinode-094855-m02 status: &{Name:multinode-094855-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.83s)

TestMultiNode/serial/RestartMultiNode (45.12s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-094855 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-094855 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (44.506771875s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094855 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (45.12s)

TestMultiNode/serial/ValidateNameConflict (25.99s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-094855
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-094855-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-094855-m02 --driver=docker  --container-runtime=crio: exit status 14 (74.043994ms)
-- stdout --
	* [multinode-094855-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-866963/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-866963/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-094855-m02' is duplicated with machine name 'multinode-094855-m02' in profile 'multinode-094855'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-094855-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-094855-m03 --driver=docker  --container-runtime=crio: (23.634734001s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-094855
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-094855: exit status 80 (287.212288ms)
-- stdout --
	* Adding node m03 to cluster multinode-094855 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-094855-m03 already exists in multinode-094855-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-094855-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-094855-m03: (1.93466521s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.99s)

TestPreload (116.42s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-394396 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-394396 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m18.094947797s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-394396 image pull gcr.io/k8s-minikube/busybox
E0407 13:26:37.159846  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/functional-893516/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-394396 image pull gcr.io/k8s-minikube/busybox: (3.522343345s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-394396
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-394396: (5.742264988s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-394396 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-394396 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (26.400957626s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-394396 image list
helpers_test.go:175: Cleaning up "test-preload-394396" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-394396
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-394396: (2.423492372s)
--- PASS: TestPreload (116.42s)

TestScheduledStopUnix (97.5s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-758955 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-758955 --memory=2048 --driver=docker  --container-runtime=crio: (21.502064569s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-758955 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-758955 -n scheduled-stop-758955
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-758955 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0407 13:27:35.933563  873820 retry.go:31] will retry after 66.254µs: open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/scheduled-stop-758955/pid: no such file or directory
I0407 13:27:35.934732  873820 retry.go:31] will retry after 113.104µs: open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/scheduled-stop-758955/pid: no such file or directory
I0407 13:27:35.935877  873820 retry.go:31] will retry after 287.634µs: open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/scheduled-stop-758955/pid: no such file or directory
I0407 13:27:35.937031  873820 retry.go:31] will retry after 348.938µs: open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/scheduled-stop-758955/pid: no such file or directory
I0407 13:27:35.938129  873820 retry.go:31] will retry after 557.307µs: open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/scheduled-stop-758955/pid: no such file or directory
I0407 13:27:35.939271  873820 retry.go:31] will retry after 846.856µs: open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/scheduled-stop-758955/pid: no such file or directory
I0407 13:27:35.940414  873820 retry.go:31] will retry after 1.576839ms: open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/scheduled-stop-758955/pid: no such file or directory
I0407 13:27:35.942695  873820 retry.go:31] will retry after 1.603203ms: open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/scheduled-stop-758955/pid: no such file or directory
I0407 13:27:35.945010  873820 retry.go:31] will retry after 3.692413ms: open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/scheduled-stop-758955/pid: no such file or directory
I0407 13:27:35.949329  873820 retry.go:31] will retry after 5.353609ms: open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/scheduled-stop-758955/pid: no such file or directory
I0407 13:27:35.955629  873820 retry.go:31] will retry after 3.220049ms: open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/scheduled-stop-758955/pid: no such file or directory
I0407 13:27:35.959977  873820 retry.go:31] will retry after 6.274552ms: open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/scheduled-stop-758955/pid: no such file or directory
I0407 13:27:35.966612  873820 retry.go:31] will retry after 12.133797ms: open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/scheduled-stop-758955/pid: no such file or directory
I0407 13:27:35.979913  873820 retry.go:31] will retry after 22.204846ms: open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/scheduled-stop-758955/pid: no such file or directory
I0407 13:27:36.003185  873820 retry.go:31] will retry after 36.931197ms: open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/scheduled-stop-758955/pid: no such file or directory
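The `retry.go:31` lines above record an exponential backoff: each wait grows by a roughly constant factor, perturbed by random jitter (hence occasional shorter waits, as in the 5.35ms → 3.22ms step). A minimal Python sketch of that pattern (illustrative only; minikube's actual retry logic lives in Go in `retry.go`):

```python
import random

def backoff_waits(initial=66e-6, factor=1.7, jitter=0.5, attempts=15):
    """Return a list of wait durations (seconds) that grow roughly
    exponentially, each perturbed by +/- `jitter` relative noise."""
    waits = []
    wait = initial
    for _ in range(attempts):
        # jitter keeps concurrent retriers from synchronizing
        waits.append(wait * (1 + random.uniform(-jitter, jitter)))
        wait *= factor
    return waits
```

With jitter, individual waits may dip below their predecessor, but the exponential trend dominates over the full sequence.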
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-758955 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-758955 -n scheduled-stop-758955
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-758955
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-758955 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-758955
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-758955: exit status 7 (70.596086ms)

-- stdout --
	scheduled-stop-758955
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-758955 -n scheduled-stop-758955
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-758955 -n scheduled-stop-758955: exit status 7 (71.185532ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-758955" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-758955
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-758955: (4.602780633s)
--- PASS: TestScheduledStopUnix (97.50s)

TestInsufficientStorage (10.38s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-672018 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-672018 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.962424596s)

-- stdout --
	{"specversion":"1.0","id":"1cf4c9de-9adb-481d-801e-b8cd965d1f8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-672018] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b7c57868-6320-4e92-a1e7-dd4b229e882b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20598"}}
	{"specversion":"1.0","id":"a0752083-c97e-462b-a49e-c8fad505aafa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3d0941ae-90a9-48b2-ab78-ef1067744705","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20598-866963/kubeconfig"}}
	{"specversion":"1.0","id":"5e057e57-64f7-40ed-b8bd-3e175ab94b11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-866963/.minikube"}}
	{"specversion":"1.0","id":"9f90d3bc-0cb4-4e13-b3df-7e18331737e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"19e9ff89-7a23-4e7f-b4f9-2165586c690b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"69cb87b7-c9ae-4bb1-a872-090e04ebbd08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"90e3c87d-3bfc-40d2-a9a2-913ed8ddf193","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0d5c3bda-1601-42f5-95c8-ddec0c979817","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"25c666bb-d03d-4c0f-8c67-0165b1bd79cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"3b2ff0f0-ea70-4cbf-a5de-8e017c12fb31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-672018\" primary control-plane node in \"insufficient-storage-672018\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"37354f99-7d1b-444f-9605-a21f0044e753","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46-1743675393-20591 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"aab9f79e-1ec7-4088-86bf-d89ec52d32c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"5ed84296-396c-4a19-ad3e-fc1b50064848","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
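With `--output=json`, minikube emits one CloudEvents envelope per line, as in the stdout above; the suffix of the `type` field (`step`, `info`, `error`) distinguishes progress steps from log messages and fatal errors. A sketch of how such a stream could be consumed (field names are taken from the output above; the helper itself is hypothetical, not part of minikube):

```python
import json

def classify_events(lines):
    """Partition a minikube JSON event stream into step names,
    info messages, and (exitcode, message) error tuples."""
    steps, infos, errors = [], [], []
    for line in lines:
        ev = json.loads(line)
        kind = ev["type"].rsplit(".", 1)[-1]  # "step", "info", or "error"
        data = ev["data"]
        if kind == "step":
            steps.append(data["name"])
        elif kind == "error":
            errors.append((data.get("exitcode"), data.get("message")))
        else:
            infos.append(data.get("message"))
    return steps, infos, errors
```

A failed start like the one above would end with an `error` event carrying `exitcode` 26 rather than a final `step`.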
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-672018 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-672018 --output=json --layout=cluster: exit status 7 (264.704777ms)

-- stdout --
	{"Name":"insufficient-storage-672018","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-672018","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0407 13:28:59.725419 1052144 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-672018" does not appear in /home/jenkins/minikube-integration/20598-866963/kubeconfig

** /stderr **
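The `--layout=cluster` JSON above uses HTTP-style status codes (507 InsufficientStorage, 405 Stopped, 500 Error) for the cluster, each node, and each component. A sketch of flattening that structure (key names as they appear in the output above; the function is illustrative only):

```python
import json

def summarize_cluster_status(raw):
    """Return (cluster status name, {node: {component: status name}})
    from `minikube status --output=json --layout=cluster` output."""
    st = json.loads(raw)
    nodes = {
        node["Name"]: {name: comp["StatusName"]
                       for name, comp in node.get("Components", {}).items()}
        for node in st.get("Nodes", [])
    }
    return st["StatusName"], nodes
```

For the run above this would yield `InsufficientStorage` at the cluster level with `apiserver` and `kubelet` both `Stopped` on the single node.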
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-672018 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-672018 --output=json --layout=cluster: exit status 7 (280.139202ms)

-- stdout --
	{"Name":"insufficient-storage-672018","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-672018","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0407 13:29:00.005391 1052241 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-672018" does not appear in /home/jenkins/minikube-integration/20598-866963/kubeconfig
	E0407 13:29:00.015963 1052241 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/insufficient-storage-672018/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-672018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-672018
E0407 13:29:01.221472  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-672018: (1.87265866s)
--- PASS: TestInsufficientStorage (10.38s)

TestRunningBinaryUpgrade (54.29s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.141506703 start -p running-upgrade-552795 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.141506703 start -p running-upgrade-552795 --memory=2200 --vm-driver=docker  --container-runtime=crio: (26.275759837s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-552795 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-552795 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.562735317s)
helpers_test.go:175: Cleaning up "running-upgrade-552795" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-552795
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-552795: (5.900018806s)
--- PASS: TestRunningBinaryUpgrade (54.29s)

TestKubernetesUpgrade (329.31s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-214605 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-214605 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.12214615s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-214605
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-214605: (4.392775453s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-214605 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-214605 status --format={{.Host}}: exit status 7 (68.494383ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-214605 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-214605 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m24.83283586s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-214605 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-214605 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-214605 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (79.562959ms)

-- stdout --
	* [kubernetes-upgrade-214605] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-866963/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-866963/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-214605
	    minikube start -p kubernetes-upgrade-214605 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2146052 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-214605 --kubernetes-version=v1.32.2
	    

** /stderr **
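The downgrade attempt above fails fast with exit status 106 (`K8S_DOWNGRADE_UNSUPPORTED`) because the requested v1.20.0 is older than the running cluster's v1.32.2. The underlying check amounts to a semantic-version comparison; a minimal sketch (hypothetical, not minikube's actual code):

```python
def is_downgrade(current, requested):
    """True when `requested` is an older Kubernetes version than `current`
    (naive dotted-numeric comparison; pre-release tags not handled)."""
    parse = lambda v: tuple(int(part) for part in v.lstrip("v").split("."))
    return parse(requested) < parse(current)
```

Tuple comparison gives the usual major/minor/patch ordering, so `v1.20.0` against a running `v1.32.2` is flagged as a downgrade and the start is refused.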
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-214605 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-214605 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (11.539629622s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-214605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-214605
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-214605: (2.200230163s)
--- PASS: TestKubernetesUpgrade (329.31s)

TestMissingContainerUpgrade (158.04s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2209242690 start -p missing-upgrade-035685 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2209242690 start -p missing-upgrade-035685 --memory=2200 --driver=docker  --container-runtime=crio: (1m29.393248494s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-035685
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-035685: (11.008293827s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-035685
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-035685 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-035685 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (52.9396486s)
helpers_test.go:175: Cleaning up "missing-upgrade-035685" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-035685
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-035685: (1.990270305s)
--- PASS: TestMissingContainerUpgrade (158.04s)

TestStoppedBinaryUpgrade/Setup (2.67s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.67s)

TestStoppedBinaryUpgrade/Upgrade (132.76s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2561137659 start -p stopped-upgrade-343617 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2561137659 start -p stopped-upgrade-343617 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m30.240887518s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2561137659 -p stopped-upgrade-343617 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2561137659 -p stopped-upgrade-343617 stop: (3.081590698s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-343617 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-343617 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.432561749s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (132.76s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.33s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-343617
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-343617: (1.325229671s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.33s)

TestNetworkPlugins/group/false (3.48s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-207072 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-207072 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (151.321938ms)

-- stdout --
	* [false-207072] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-866963/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-866963/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0407 13:31:54.347140 1091803 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:31:54.347393 1091803 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:31:54.347403 1091803 out.go:358] Setting ErrFile to fd 2...
	I0407 13:31:54.347420 1091803 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:31:54.347652 1091803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-866963/.minikube/bin
	I0407 13:31:54.348250 1091803 out.go:352] Setting JSON to false
	I0407 13:31:54.349541 1091803 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":18857,"bootTime":1744013857,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 13:31:54.349618 1091803 start.go:139] virtualization: kvm guest
	I0407 13:31:54.352086 1091803 out.go:177] * [false-207072] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 13:31:54.353747 1091803 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 13:31:54.353772 1091803 notify.go:220] Checking for updates...
	I0407 13:31:54.356349 1091803 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:31:54.357772 1091803 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-866963/kubeconfig
	I0407 13:31:54.359238 1091803 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-866963/.minikube
	I0407 13:31:54.360524 1091803 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 13:31:54.361921 1091803 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:31:54.363696 1091803 config.go:182] Loaded profile config "cert-expiration-289109": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:31:54.363794 1091803 config.go:182] Loaded profile config "kubernetes-upgrade-214605": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:31:54.363872 1091803 config.go:182] Loaded profile config "running-upgrade-552795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0407 13:31:54.363948 1091803 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:31:54.387042 1091803 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 13:31:54.387226 1091803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 13:31:54.440086 1091803 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:74 SystemTime:2025-04-07 13:31:54.429501328 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0407 13:31:54.440240 1091803 docker.go:318] overlay module found
	I0407 13:31:54.442274 1091803 out.go:177] * Using the docker driver based on user configuration
	I0407 13:31:54.443625 1091803 start.go:297] selected driver: docker
	I0407 13:31:54.443646 1091803 start.go:901] validating driver "docker" against <nil>
	I0407 13:31:54.443663 1091803 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:31:54.445989 1091803 out.go:201] 
	W0407 13:31:54.447163 1091803 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0407 13:31:54.448261 1091803 out.go:201] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-207072 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-207072

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-207072

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-207072

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-207072

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-207072

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-207072

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-207072

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-207072

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-207072

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-207072

>>> host: /etc/nsswitch.conf:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: /etc/hosts:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: /etc/resolv.conf:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-207072

>>> host: crictl pods:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: crictl containers:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> k8s: describe netcat deployment:
error: context "false-207072" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-207072" does not exist

>>> k8s: netcat logs:
error: context "false-207072" does not exist

>>> k8s: describe coredns deployment:
error: context "false-207072" does not exist

>>> k8s: describe coredns pods:
error: context "false-207072" does not exist

>>> k8s: coredns logs:
error: context "false-207072" does not exist

>>> k8s: describe api server pod(s):
error: context "false-207072" does not exist

>>> k8s: api server logs:
error: context "false-207072" does not exist

>>> host: /etc/cni:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: ip a s:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: ip r s:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: iptables-save:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: iptables table nat:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> k8s: describe kube-proxy daemon set:
error: context "false-207072" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-207072" does not exist

>>> k8s: kube-proxy logs:
error: context "false-207072" does not exist

>>> host: kubelet daemon status:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: kubelet daemon config:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> k8s: kubelet logs:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20598-866963/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 07 Apr 2025 13:31:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: cert-expiration-289109
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20598-866963/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 07 Apr 2025 13:30:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-214605
contexts:
- context:
    cluster: cert-expiration-289109
    extensions:
    - extension:
        last-update: Mon, 07 Apr 2025 13:31:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-289109
  name: cert-expiration-289109
- context:
    cluster: kubernetes-upgrade-214605
    user: kubernetes-upgrade-214605
  name: kubernetes-upgrade-214605
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-289109
  user:
    client-certificate: /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/cert-expiration-289109/client.crt
    client-key: /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/cert-expiration-289109/client.key
- name: kubernetes-upgrade-214605
  user:
    client-certificate: /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/kubernetes-upgrade-214605/client.crt
    client-key: /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/kubernetes-upgrade-214605/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-207072

>>> host: docker daemon status:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: docker daemon config:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: /etc/docker/daemon.json:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: docker system info:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: cri-docker daemon status:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: cri-docker daemon config:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: cri-dockerd version:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: containerd daemon status:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: containerd daemon config:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: /etc/containerd/config.toml:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: containerd config dump:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: crio daemon status:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: crio daemon config:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: /etc/crio:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

>>> host: crio config:
* Profile "false-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-207072"

----------------------- debugLogs end: false-207072 [took: 3.14392925s] --------------------------------
helpers_test.go:175: Cleaning up "false-207072" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-207072
--- PASS: TestNetworkPlugins/group/false (3.48s)

TestPause/serial/Start (45.32s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-500554 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-500554 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (45.319588937s)
--- PASS: TestPause/serial/Start (45.32s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-896149 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-896149 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (76.521247ms)
-- stdout --
	* [NoKubernetes-896149] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-866963/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-866963/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (22.09s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-896149 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-896149 --driver=docker  --container-runtime=crio: (21.787710297s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-896149 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (22.09s)

TestPause/serial/SecondStartNoReconfiguration (36.68s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-500554 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-500554 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.672461266s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (36.68s)

TestNoKubernetes/serial/StartWithStopK8s (6.33s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-896149 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-896149 --no-kubernetes --driver=docker  --container-runtime=crio: (4.068148152s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-896149 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-896149 status -o json: exit status 2 (289.933952ms)
-- stdout --
	{"Name":"NoKubernetes-896149","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-896149
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-896149: (1.973544152s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.33s)

TestNoKubernetes/serial/Start (5.51s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-896149 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-896149 --no-kubernetes --driver=docker  --container-runtime=crio: (5.5049426s)
--- PASS: TestNoKubernetes/serial/Start (5.51s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-896149 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-896149 "sudo systemctl is-active --quiet service kubelet": exit status 1 (282.331673ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

TestNoKubernetes/serial/ProfileList (16.79s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.85122557s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (16.79s)

TestPause/serial/Pause (0.77s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-500554 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.77s)

TestPause/serial/VerifyStatus (0.33s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-500554 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-500554 --output=json --layout=cluster: exit status 2 (334.705335ms)
-- stdout --
	{"Name":"pause-500554","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-500554","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)

TestPause/serial/Unpause (0.68s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-500554 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

TestNoKubernetes/serial/Stop (1.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-896149
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-896149: (1.207602816s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestPause/serial/PauseAgain (0.78s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-500554 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.78s)

TestPause/serial/DeletePaused (2.72s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-500554 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-500554 --alsologtostderr -v=5: (2.717825785s)
--- PASS: TestPause/serial/DeletePaused (2.72s)

TestNoKubernetes/serial/StartNoArgs (9.76s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-896149 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-896149 --driver=docker  --container-runtime=crio: (9.7575595s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.76s)

TestPause/serial/VerifyDeletedResources (0.72s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-500554
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-500554: exit status 1 (17.842748ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-500554: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.72s)

TestStartStop/group/old-k8s-version/serial/FirstStart (140.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-975237 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-975237 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m20.296446228s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (140.30s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-896149 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-896149 "sudo systemctl is-active --quiet service kubelet": exit status 1 (271.605271ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestStartStop/group/no-preload/serial/FirstStart (62.55s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-829179 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
E0407 13:34:01.221237  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-829179 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (1m2.552262106s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (62.55s)

TestStartStop/group/embed-certs/serial/FirstStart (48.26s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-153810 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-153810 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (48.264680626s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (48.26s)

TestStartStop/group/no-preload/serial/DeployApp (12.31s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-829179 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [af2ef910-b467-4e36-aad1-ac8d0741b699] Pending
helpers_test.go:344: "busybox" [af2ef910-b467-4e36-aad1-ac8d0741b699] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [af2ef910-b467-4e36-aad1-ac8d0741b699] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 12.00373114s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-829179 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.31s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-829179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-829179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.078317765s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-829179 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/no-preload/serial/Stop (12.07s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-829179 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-829179 --alsologtostderr -v=3: (12.066855393s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.07s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-646999 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-646999 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (42.746492117s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.75s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-829179 -n no-preload-829179
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-829179 -n no-preload-829179: exit status 7 (81.761189ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-829179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (263.78s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-829179 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
E0407 13:35:14.094220  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/functional-893516/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-829179 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (4m23.431264537s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-829179 -n no-preload-829179
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (263.78s)

TestStartStop/group/embed-certs/serial/DeployApp (10.3s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-153810 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f88e72c1-9414-462e-9234-0867f147a704] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f88e72c1-9414-462e-9234-0867f147a704] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004027924s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-153810 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.30s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-153810 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-153810 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/embed-certs/serial/Stop (12.08s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-153810 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-153810 --alsologtostderr -v=3: (12.076025022s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.08s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-646999 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8586c1be-e21b-4fbe-a324-1f58e2a012db] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8586c1be-e21b-4fbe-a324-1f58e2a012db] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.006159681s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-646999 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.31s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-153810 -n embed-certs-153810
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-153810 -n embed-certs-153810: exit status 7 (91.739314ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-153810 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (297.49s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-153810 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-153810 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (4m57.14410789s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-153810 -n embed-certs-153810
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (297.49s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-975237 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1b2a5faa-cbae-4fcd-8db4-4b3521cdee07] Pending
helpers_test.go:344: "busybox" [1b2a5faa-cbae-4fcd-8db4-4b3521cdee07] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1b2a5faa-cbae-4fcd-8db4-4b3521cdee07] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004046694s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-975237 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.41s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-646999 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-646999 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-646999 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-646999 --alsologtostderr -v=3: (12.80296732s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.80s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-975237 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-975237 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.85s)

TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-975237 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-975237 --alsologtostderr -v=3: (12.012000892s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-646999 -n default-k8s-diff-port-646999
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-646999 -n default-k8s-diff-port-646999: exit status 7 (82.970893ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-646999 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-646999 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-646999 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (4m22.151206858s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-646999 -n default-k8s-diff-port-646999
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.50s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-975237 -n old-k8s-version-975237
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-975237 -n old-k8s-version-975237: exit status 7 (100.784405ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-975237 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/old-k8s-version/serial/SecondStart (131.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-975237 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0407 13:37:04.288790  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-975237 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m10.729907989s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-975237 -n old-k8s-version-975237
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (131.05s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-kvfcc" [9020a00b-0cc7-4a2d-9a24-9e8b86e95374] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003118712s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-kvfcc" [9020a00b-0cc7-4a2d-9a24-9e8b86e95374] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003234871s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-975237 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-975237 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (2.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-975237 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-975237 -n old-k8s-version-975237
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-975237 -n old-k8s-version-975237: exit status 2 (303.82439ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-975237 -n old-k8s-version-975237
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-975237 -n old-k8s-version-975237: exit status 2 (303.713247ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-975237 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-975237 -n old-k8s-version-975237
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-975237 -n old-k8s-version-975237
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.65s)

TestStartStop/group/newest-cni/serial/FirstStart (27.89s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-064407 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
E0407 13:39:01.221015  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/addons-665428/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-064407 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (27.884909938s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (27.89s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.76s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-064407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.76s)

TestStartStop/group/newest-cni/serial/Stop (1.2s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-064407 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-064407 --alsologtostderr -v=3: (1.196811123s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.20s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-064407 -n newest-cni-064407
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-064407 -n newest-cni-064407: exit status 7 (71.704423ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-064407 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (13.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-064407 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-064407 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (12.932535978s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-064407 -n newest-cni-064407
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.25s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-064407 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (2.96s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-064407 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-064407 -n newest-cni-064407
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-064407 -n newest-cni-064407: exit status 2 (360.979161ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-064407 -n newest-cni-064407
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-064407 -n newest-cni-064407: exit status 2 (358.577459ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-064407 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-064407 -n newest-cni-064407
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-064407 -n newest-cni-064407
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.96s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-d4nz5" [ae36fb12-e8aa-41bc-bf66-e6598244528e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003502336s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/auto/Start (43.24s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-207072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-207072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (43.239537121s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.24s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-d4nz5" [ae36fb12-e8aa-41bc-bf66-e6598244528e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003208357s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-829179 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-829179 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (2.88s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-829179 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-829179 -n no-preload-829179
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-829179 -n no-preload-829179: exit status 2 (312.224426ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-829179 -n no-preload-829179
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-829179 -n no-preload-829179: exit status 2 (325.683785ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-829179 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-829179 -n no-preload-829179
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-829179 -n no-preload-829179
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.88s)

TestNetworkPlugins/group/flannel/Start (46.89s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-207072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0407 13:40:14.093542  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/functional-893516/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-207072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (46.893307162s)
--- PASS: TestNetworkPlugins/group/flannel/Start (46.89s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-207072 "pgrep -a kubelet"
I0407 13:40:14.808795  873820 config.go:182] Loaded profile config "auto-207072": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/auto/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-207072 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-4bwbb" [f1ad2ec3-8044-4e11-ba63-333664354f8b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-4bwbb" [f1ad2ec3-8044-4e11-ba63-333664354f8b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003283435s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.23s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-207072 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-207072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-207072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-rkcz8" [33f8e525-5357-49eb-8405-85c9dc74504c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003993959s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-ftrcz" [023831d4-fb93-44a8-9d12-ec1888dd104e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004429035s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-rkcz8" [33f8e525-5357-49eb-8405-85c9dc74504c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004140432s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-646999 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-207072 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-hfq4w" [4c7b3b88-7721-4c00-a8f0-a0086ff05456] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004185735s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-646999 image list --format=json
I0407 13:40:40.732095  873820 config.go:182] Loaded profile config "flannel-207072": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestNetworkPlugins/group/flannel/NetCatPod (11.22s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-207072 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-4b6bs" [2c1a74d2-19f4-43b4-9f97-d7d1a2b5adfc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-4b6bs" [2c1a74d2-19f4-43b4-9f97-d7d1a2b5adfc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004066517s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.22s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-646999 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-646999 -n default-k8s-diff-port-646999
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-646999 -n default-k8s-diff-port-646999: exit status 2 (382.19318ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-646999 -n default-k8s-diff-port-646999
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-646999 -n default-k8s-diff-port-646999: exit status 2 (369.545626ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-646999 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-646999 -n default-k8s-diff-port-646999
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-646999 -n default-k8s-diff-port-646999
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.46s)

TestNetworkPlugins/group/calico/Start (64.04s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-207072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-207072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m4.044864838s)
--- PASS: TestNetworkPlugins/group/calico/Start (64.04s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-hfq4w" [4c7b3b88-7721-4c00-a8f0-a0086ff05456] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004496392s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-153810 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestNetworkPlugins/group/custom-flannel/Start (56.56s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-207072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0407 13:40:50.466089  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/old-k8s-version-975237/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:40:50.472644  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/old-k8s-version-975237/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:40:50.484231  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/old-k8s-version-975237/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:40:50.505949  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/old-k8s-version-975237/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:40:50.547206  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/old-k8s-version-975237/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:40:50.628523  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/old-k8s-version-975237/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:40:50.790288  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/old-k8s-version-975237/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:40:51.111924  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/old-k8s-version-975237/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-207072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (56.560598765s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (56.56s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-153810 image list --format=json
E0407 13:40:51.754022  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/old-k8s-version-975237/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)
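The VerifyKubernetesImages step above parses the JSON from `minikube image list --format=json` and reports images outside the expected set, as with the kindest/kindnetd and busybox entries flagged in the log. A minimal sketch of that check, assuming an illustrative schema (the `repoTags` field name and the expected set below are assumptions, not minikube's exact output format):

```python
import json

# Hedged sketch: flag images whose tags are all outside the expected set.
# Schema ("repoTags") and the expected set are illustrative assumptions.
def non_minikube_images(image_list_json, expected):
    images = json.loads(image_list_json)
    found = []
    for img in images:
        tags = img.get("repoTags", [])
        # An image counts as "non-minikube" if none of its tags is expected.
        if tags and not any(t in expected for t in tags):
            found.append(tags[0])
    return found
```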

TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-207072 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)
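The DNS tests run `nslookup kubernetes.default` inside the netcat pod to confirm in-cluster name resolution. A runnable stand-in for that probe (outside a cluster the in-cluster service name will not resolve, so "localhost" in the test below is only a placeholder lookup target):

```python
import socket

# Stand-in for the `nslookup <name>` probe: resolve a name and collect the
# distinct addresses it maps to. Raises socket.gaierror on resolution failure,
# which is what the test would treat as a DNS fault.
def resolve(name):
    return sorted({info[4][0] for info in socket.getaddrinfo(name, None)})
```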

TestStartStop/group/embed-certs/serial/Pause (3.27s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-153810 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-153810 -n embed-certs-153810
E0407 13:40:53.035758  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/old-k8s-version-975237/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-153810 -n embed-certs-153810: exit status 2 (319.497326ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-153810 -n embed-certs-153810
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-153810 -n embed-certs-153810: exit status 2 (319.788667ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-153810 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-153810 --alsologtostderr -v=1: (1.054688656s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-153810 -n embed-certs-153810
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-153810 -n embed-certs-153810
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.27s)
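While the cluster is paused, `minikube status` exits with status 2 and prints "Paused" (for the apiserver) or "Stopped" (for the kubelet); the harness logs this as "status error: exit status 2 (may be ok)" instead of failing. A sketch of that decision, illustrative rather than minikube's actual code:

```python
# Hedged sketch: treat exit status 2 as acceptable while paused, provided the
# status template printed one of the expected component states.
def pause_status_ok(exit_code, output):
    if exit_code == 0:
        return True
    # minikube exits 2 when a queried component is not running.
    return exit_code == 2 and output.strip() in {"Paused", "Stopped"}
```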

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-207072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-207072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)
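The Localhost and HairPin tests both reduce to the probe `nc -w 5 -i 5 -z <host> 8080`: open a TCP connection with a timeout and report success without transferring data (nc's zero-I/O mode). A minimal Python equivalent:

```python
import socket

# Minimal equivalent of `nc -w <timeout> -z <host> <port>`: succeed if a TCP
# connection can be established within the timeout, sending no data.
def tcp_port_open(host, port, timeout=5.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```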

TestNetworkPlugins/group/kindnet/Start (49.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-207072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0407 13:41:00.719252  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/old-k8s-version-975237/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:41:10.961039  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/old-k8s-version-975237/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-207072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (49.149445442s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (49.15s)

TestNetworkPlugins/group/bridge/Start (40.01s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-207072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0407 13:41:31.443623  873820 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/old-k8s-version-975237/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-207072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (40.01445159s)
--- PASS: TestNetworkPlugins/group/bridge/Start (40.01s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-207072 "pgrep -a kubelet"
I0407 13:41:44.882387  873820 config.go:182] Loaded profile config "custom-flannel-207072": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-207072 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-bjbtp" [c9ab4367-3f35-4f78-bae1-5f8a9389e1ad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-bjbtp" [c9ab4367-3f35-4f78-bae1-5f8a9389e1ad] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003721073s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.26s)
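The NetCatPod steps follow the same readiness-wait pattern throughout ("waiting 15m0s for pods matching app=netcat ... healthy within 9.003s"): poll a condition until it holds or a deadline passes. A generic sketch of that loop, where `check()` stands in for the real pod-phase lookup that observes the pod go Pending -> Running:

```python
import time

# Hedged sketch of the poll-until-healthy pattern used by the test harness.
# Returns True as soon as check() succeeds, False once the deadline passes.
def wait_for(check, timeout, interval=0.01):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False
```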

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-sdrg6" [67f8855a-6b6b-4ea4-80d0-89b18063f516] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003489755s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vq967" [601f6a5c-3e6a-4b8e-873c-4fc54d567935] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004342235s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-207072 "pgrep -a kubelet"
I0407 13:41:53.907824  873820 config.go:182] Loaded profile config "kindnet-207072": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-207072 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-l5mlg" [999fc92f-b64c-4dd6-994c-10ac03514b9a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-l5mlg" [999fc92f-b64c-4dd6-994c-10ac03514b9a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004117562s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.20s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-207072 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-207072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-207072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-207072 "pgrep -a kubelet"
I0407 13:41:55.306016  873820 config.go:182] Loaded profile config "calico-207072": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

TestNetworkPlugins/group/calico/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-207072 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-l9f5m" [d76bec1e-22ed-4604-bf22-1c5e89562e68] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
I0407 13:41:55.702163  873820 config.go:182] Loaded profile config "bridge-207072": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
helpers_test.go:344: "netcat-5d86dc444-l9f5m" [d76bec1e-22ed-4604-bf22-1c5e89562e68] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005398879s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.33s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-207072 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-207072 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-ljv4h" [1bc4bbf3-e23b-473e-a092-bd8f6e383490] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-ljv4h" [1bc4bbf3-e23b-473e-a092-bd8f6e383490] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.00391693s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-207072 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-207072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-207072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/calico/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-207072 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-207072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-207072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-207072 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-207072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-207072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/Start (38.55s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-207072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-207072 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (38.553642643s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (38.55s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-207072 "pgrep -a kubelet"
I0407 13:42:54.173561  873820 config.go:182] Loaded profile config "enable-default-cni-207072": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-207072 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-8wt8z" [bab36813-7843-4c49-a263-b343ed5abf5e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-8wt8z" [bab36813-7843-4c49-a263-b343ed5abf5e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.00327851s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.18s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-207072 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-207072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-207072 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

Test skip (27/331)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

TestDownloadOnly/v1.32.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

TestDownloadOnly/v1.32.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

TestAddons/serial/Volcano (0.27s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-665428 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.27s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:702: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-923270" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-923270
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/kubenet (3.32s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-207072 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-207072

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-207072

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-207072

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-207072

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-207072

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-207072

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-207072

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-207072

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-207072

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-207072

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: /etc/hosts:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: /etc/resolv.conf:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-207072

>>> host: crictl pods:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: crictl containers:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> k8s: describe netcat deployment:
error: context "kubenet-207072" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-207072" does not exist

>>> k8s: netcat logs:
error: context "kubenet-207072" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-207072" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-207072" does not exist

>>> k8s: coredns logs:
error: context "kubenet-207072" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-207072" does not exist

>>> k8s: api server logs:
error: context "kubenet-207072" does not exist

>>> host: /etc/cni:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: ip a s:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: ip r s:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: iptables-save:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: iptables table nat:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-207072" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-207072" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-207072" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: kubelet daemon config:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> k8s: kubelet logs:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20598-866963/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 07 Apr 2025 13:31:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: cert-expiration-289109
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20598-866963/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 07 Apr 2025 13:30:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-214605
contexts:
- context:
    cluster: cert-expiration-289109
    extensions:
    - extension:
        last-update: Mon, 07 Apr 2025 13:31:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-289109
  name: cert-expiration-289109
- context:
    cluster: kubernetes-upgrade-214605
    user: kubernetes-upgrade-214605
  name: kubernetes-upgrade-214605
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-289109
  user:
    client-certificate: /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/cert-expiration-289109/client.crt
    client-key: /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/cert-expiration-289109/client.key
- name: kubernetes-upgrade-214605
  user:
    client-certificate: /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/kubernetes-upgrade-214605/client.crt
    client-key: /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/kubernetes-upgrade-214605/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-207072

>>> host: docker daemon status:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: docker daemon config:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: docker system info:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: cri-docker daemon status:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: cri-docker daemon config:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: cri-dockerd version:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: containerd daemon status:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: containerd daemon config:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: containerd config dump:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: crio daemon status:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: crio daemon config:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: /etc/crio:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

>>> host: crio config:
* Profile "kubenet-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-207072"

----------------------- debugLogs end: kubenet-207072 [took: 3.16611804s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-207072" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-207072
--- SKIP: TestNetworkPlugins/group/kubenet (3.32s)

TestNetworkPlugins/group/cilium (3.94s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-207072 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-207072

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-207072

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-207072

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-207072

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-207072

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-207072

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-207072

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-207072

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-207072

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-207072

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-207072

>>> host: crictl pods:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> host: crictl containers:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> k8s: describe netcat deployment:
error: context "cilium-207072" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-207072" does not exist

>>> k8s: netcat logs:
error: context "cilium-207072" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-207072" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-207072" does not exist

>>> k8s: coredns logs:
error: context "cilium-207072" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-207072" does not exist

>>> k8s: api server logs:
error: context "cilium-207072" does not exist

>>> host: /etc/cni:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> host: ip a s:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> host: ip r s:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> host: iptables-save:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> host: iptables table nat:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-207072

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-207072

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-207072" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-207072" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-207072

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-207072

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-207072" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-207072" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-207072" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-207072" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-207072" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> host: kubelet daemon config:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> k8s: kubelet logs:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20598-866963/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 07 Apr 2025 13:31:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: cert-expiration-289109
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20598-866963/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 07 Apr 2025 13:30:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-214605
contexts:
- context:
    cluster: cert-expiration-289109
    extensions:
    - extension:
        last-update: Mon, 07 Apr 2025 13:31:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-289109
  name: cert-expiration-289109
- context:
    cluster: kubernetes-upgrade-214605
    user: kubernetes-upgrade-214605
  name: kubernetes-upgrade-214605
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-289109
  user:
    client-certificate: /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/cert-expiration-289109/client.crt
    client-key: /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/cert-expiration-289109/client.key
- name: kubernetes-upgrade-214605
  user:
    client-certificate: /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/kubernetes-upgrade-214605/client.crt
    client-key: /home/jenkins/minikube-integration/20598-866963/.minikube/profiles/kubernetes-upgrade-214605/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-207072

>>> host: docker daemon status:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> host: docker daemon config:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> host: docker system info:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> host: cri-docker daemon status:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> host: cri-docker daemon config:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> host: cri-dockerd version:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> host: containerd daemon status:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> host: containerd daemon config:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> host: containerd config dump:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> host: crio daemon status:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> host: crio daemon config:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> host: /etc/crio:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

>>> host: crio config:
* Profile "cilium-207072" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-207072"

----------------------- debugLogs end: cilium-207072 [took: 3.782131516s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-207072" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-207072
--- SKIP: TestNetworkPlugins/group/cilium (3.94s)