Test Report: Docker_Linux_crio_arm64 20345

cc513754b073c495fe9720da434c9dc88a403a6c:2025-02-04:38213

Failed tests (1/331)

| Order | Failed test                 | Duration (s) |
|-------|-----------------------------|--------------|
| 36    | TestAddons/parallel/Ingress | 156.64       |
TestAddons/parallel/Ingress (156.64s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-405803 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-405803 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-405803 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2feb6ee4-9c31-4086-989c-23ba8606ba51] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2feb6ee4-9c31-4086-989c-23ba8606ba51] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.003513559s
I0204 18:22:51.416632  304949 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-405803 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-405803 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.970531331s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-405803 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-405803 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
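
The failing step above is the in-node HTTP probe: "ssh: Process exited with status 28" is minikube ssh propagating the remote command's exit code, and curl exit code 28 means the operation timed out, i.e. the ingress controller never answered within the 2m9s window. A minimal sketch for reproducing the probe by hand, assuming the addons-405803 profile is still running (the --max-time bound is an addition here, not part of the test itself):

	# Re-issue the probe the test performs against the in-node ingress controller
	minikube -p addons-405803 ssh -- curl -s --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/
	echo $?   # 28 again would indicate a curl operation timeout

	# Sanity-check the controller pod and the Ingress object the test created
	kubectl --context addons-405803 -n ingress-nginx get pods -o wide
	kubectl --context addons-405803 get ingress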
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-405803
helpers_test.go:235: (dbg) docker inspect addons-405803:

-- stdout --
	[
	    {
	        "Id": "9bad677025169f7baaa8822244cf2368514a2848aeb636956188a80ead935a2f",
	        "Created": "2025-02-04T18:19:06.915487547Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 306216,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-02-04T18:19:07.070577849Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0434cf58b6dbace281e5de753aa4b2e3fe33dc9a3be53021531403743c3f155a",
	        "ResolvConfPath": "/var/lib/docker/containers/9bad677025169f7baaa8822244cf2368514a2848aeb636956188a80ead935a2f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9bad677025169f7baaa8822244cf2368514a2848aeb636956188a80ead935a2f/hostname",
	        "HostsPath": "/var/lib/docker/containers/9bad677025169f7baaa8822244cf2368514a2848aeb636956188a80ead935a2f/hosts",
	        "LogPath": "/var/lib/docker/containers/9bad677025169f7baaa8822244cf2368514a2848aeb636956188a80ead935a2f/9bad677025169f7baaa8822244cf2368514a2848aeb636956188a80ead935a2f-json.log",
	        "Name": "/addons-405803",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-405803:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-405803",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d61fe3189269b403e3aabbb49cc6283b75a546745bbf36f66f5d31e4f25b4f69-init/diff:/var/lib/docker/overlay2/f52a607be58f73e27942172f2cfef1951f7c90b777170cdf97b9b982410c7f7c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d61fe3189269b403e3aabbb49cc6283b75a546745bbf36f66f5d31e4f25b4f69/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d61fe3189269b403e3aabbb49cc6283b75a546745bbf36f66f5d31e4f25b4f69/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d61fe3189269b403e3aabbb49cc6283b75a546745bbf36f66f5d31e4f25b4f69/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-405803",
	                "Source": "/var/lib/docker/volumes/addons-405803/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-405803",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-405803",
	                "name.minikube.sigs.k8s.io": "addons-405803",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e93d2ce80d5c8fe82432bb3f89ebf8a0158e2d412976aa84b01e608b00cffc4e",
	            "SandboxKey": "/var/run/docker/netns/e93d2ce80d5c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-405803": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "5334d10cfcb39e914e285597b6085c310a8903af3b70d3a71620385c62d53bc2",
	                    "EndpointID": "458a25e347873320cc3de36adc4fbe86af204ca2df40834081149b79f65d0131",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-405803",
	                        "9bad67702516"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
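
The inspect output above shows the node container healthy ("Status": "running") with host port 33141 on 127.0.0.1 forwarded to the container's 22/tcp, so the SSH transport into the node was reachable and the timeout occurred inside it. As a sketch, that mapping can be read back with the same Go template the harness uses later in these logs:

	# Print the 127.0.0.1 host port forwarded to the node's sshd (33141 above)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-405803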
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-405803 -n addons-405803
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-405803 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-405803 logs -n 25: (1.688776226s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-305189                                                                     | download-only-305189   | jenkins | v1.35.0 | 04 Feb 25 18:18 UTC | 04 Feb 25 18:18 UTC |
	| start   | --download-only -p                                                                          | download-docker-153301 | jenkins | v1.35.0 | 04 Feb 25 18:18 UTC |                     |
	|         | download-docker-153301                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-153301                                                                   | download-docker-153301 | jenkins | v1.35.0 | 04 Feb 25 18:18 UTC | 04 Feb 25 18:18 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-388273   | jenkins | v1.35.0 | 04 Feb 25 18:18 UTC |                     |
	|         | binary-mirror-388273                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:41211                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-388273                                                                     | binary-mirror-388273   | jenkins | v1.35.0 | 04 Feb 25 18:18 UTC | 04 Feb 25 18:18 UTC |
	| addons  | enable dashboard -p                                                                         | addons-405803          | jenkins | v1.35.0 | 04 Feb 25 18:18 UTC |                     |
	|         | addons-405803                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-405803          | jenkins | v1.35.0 | 04 Feb 25 18:18 UTC |                     |
	|         | addons-405803                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-405803 --wait=true                                                                | addons-405803          | jenkins | v1.35.0 | 04 Feb 25 18:18 UTC | 04 Feb 25 18:21 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-405803 addons disable                                                                | addons-405803          | jenkins | v1.35.0 | 04 Feb 25 18:21 UTC | 04 Feb 25 18:21 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-405803 addons disable                                                                | addons-405803          | jenkins | v1.35.0 | 04 Feb 25 18:21 UTC | 04 Feb 25 18:22 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-405803          | jenkins | v1.35.0 | 04 Feb 25 18:22 UTC | 04 Feb 25 18:22 UTC |
	|         | -p addons-405803                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-405803 addons disable                                                                | addons-405803          | jenkins | v1.35.0 | 04 Feb 25 18:22 UTC | 04 Feb 25 18:22 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-405803 ip                                                                            | addons-405803          | jenkins | v1.35.0 | 04 Feb 25 18:22 UTC | 04 Feb 25 18:22 UTC |
	| addons  | addons-405803 addons disable                                                                | addons-405803          | jenkins | v1.35.0 | 04 Feb 25 18:22 UTC | 04 Feb 25 18:22 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-405803 addons                                                                        | addons-405803          | jenkins | v1.35.0 | 04 Feb 25 18:22 UTC | 04 Feb 25 18:22 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-405803 addons                                                                        | addons-405803          | jenkins | v1.35.0 | 04 Feb 25 18:22 UTC | 04 Feb 25 18:22 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-405803 ssh curl -s                                                                   | addons-405803          | jenkins | v1.35.0 | 04 Feb 25 18:22 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-405803 addons                                                                        | addons-405803          | jenkins | v1.35.0 | 04 Feb 25 18:23 UTC | 04 Feb 25 18:23 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-405803 addons                                                                        | addons-405803          | jenkins | v1.35.0 | 04 Feb 25 18:23 UTC | 04 Feb 25 18:23 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-405803 addons                                                                        | addons-405803          | jenkins | v1.35.0 | 04 Feb 25 18:23 UTC | 04 Feb 25 18:23 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-405803 addons disable                                                                | addons-405803          | jenkins | v1.35.0 | 04 Feb 25 18:23 UTC | 04 Feb 25 18:23 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ssh     | addons-405803 ssh cat                                                                       | addons-405803          | jenkins | v1.35.0 | 04 Feb 25 18:23 UTC | 04 Feb 25 18:23 UTC |
	|         | /opt/local-path-provisioner/pvc-ad381d1e-0adf-4704-b4a1-94f012121e12_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-405803 addons disable                                                                | addons-405803          | jenkins | v1.35.0 | 04 Feb 25 18:23 UTC | 04 Feb 25 18:23 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-405803 addons                                                                        | addons-405803          | jenkins | v1.35.0 | 04 Feb 25 18:23 UTC | 04 Feb 25 18:23 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-405803 ip                                                                            | addons-405803          | jenkins | v1.35.0 | 04 Feb 25 18:25 UTC | 04 Feb 25 18:25 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/04 18:18:41
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0204 18:18:41.648132  305713 out.go:345] Setting OutFile to fd 1 ...
	I0204 18:18:41.648355  305713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0204 18:18:41.648387  305713 out.go:358] Setting ErrFile to fd 2...
	I0204 18:18:41.648409  305713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0204 18:18:41.648770  305713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20345-299426/.minikube/bin
	I0204 18:18:41.649310  305713 out.go:352] Setting JSON to false
	I0204 18:18:41.650227  305713 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7271,"bootTime":1738685851,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0204 18:18:41.650366  305713 start.go:139] virtualization:  
	I0204 18:18:41.654008  305713 out.go:177] * [addons-405803] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0204 18:18:41.657645  305713 out.go:177]   - MINIKUBE_LOCATION=20345
	I0204 18:18:41.657796  305713 notify.go:220] Checking for updates...
	I0204 18:18:41.664565  305713 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0204 18:18:41.667444  305713 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20345-299426/kubeconfig
	I0204 18:18:41.670253  305713 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20345-299426/.minikube
	I0204 18:18:41.673253  305713 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0204 18:18:41.676067  305713 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0204 18:18:41.679191  305713 driver.go:394] Setting default libvirt URI to qemu:///system
	I0204 18:18:41.704808  305713 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0204 18:18:41.704933  305713 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0204 18:18:41.764722  305713 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:46 SystemTime:2025-02-04 18:18:41.755220901 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0204 18:18:41.764837  305713 docker.go:318] overlay module found
	I0204 18:18:41.767883  305713 out.go:177] * Using the docker driver based on user configuration
	I0204 18:18:41.770626  305713 start.go:297] selected driver: docker
	I0204 18:18:41.770645  305713 start.go:901] validating driver "docker" against <nil>
	I0204 18:18:41.770673  305713 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0204 18:18:41.771385  305713 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0204 18:18:41.822801  305713 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:46 SystemTime:2025-02-04 18:18:41.814207185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0204 18:18:41.822998  305713 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0204 18:18:41.823225  305713 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0204 18:18:41.826222  305713 out.go:177] * Using Docker driver with root privileges
	I0204 18:18:41.829050  305713 cni.go:84] Creating CNI manager for ""
	I0204 18:18:41.829124  305713 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0204 18:18:41.829138  305713 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0204 18:18:41.829227  305713 start.go:340] cluster config:
	{Name:addons-405803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-405803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0204 18:18:41.832404  305713 out.go:177] * Starting "addons-405803" primary control-plane node in "addons-405803" cluster
	I0204 18:18:41.835124  305713 cache.go:121] Beginning downloading kic base image for docker with crio
	I0204 18:18:41.837979  305713 out.go:177] * Pulling base image v0.0.46 ...
	I0204 18:18:41.840742  305713 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0204 18:18:41.840770  305713 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0204 18:18:41.840790  305713 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20345-299426/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4
	I0204 18:18:41.840798  305713 cache.go:56] Caching tarball of preloaded images
	I0204 18:18:41.840900  305713 preload.go:172] Found /home/jenkins/minikube-integration/20345-299426/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0204 18:18:41.840910  305713 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0204 18:18:41.841268  305713 profile.go:143] Saving config to /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/config.json ...
	I0204 18:18:41.841306  305713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/config.json: {Name:mkc0fdc9b1bb81fc48f383eee036975ce6f185c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0204 18:18:41.856255  305713 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0204 18:18:41.856399  305713 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0204 18:18:41.856427  305713 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory, skipping pull
	I0204 18:18:41.856438  305713 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in cache, skipping pull
	I0204 18:18:41.856446  305713 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	I0204 18:18:41.856452  305713 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 from local cache
	I0204 18:18:59.201151  305713 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 from cached tarball
	I0204 18:18:59.201191  305713 cache.go:230] Successfully downloaded all kic artifacts
	I0204 18:18:59.201220  305713 start.go:360] acquireMachinesLock for addons-405803: {Name:mk27cd1a672f3d0875dd041d675ce2e805cfe045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0204 18:18:59.201337  305713 start.go:364] duration metric: took 94.931µs to acquireMachinesLock for "addons-405803"
	I0204 18:18:59.201368  305713 start.go:93] Provisioning new machine with config: &{Name:addons-405803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-405803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0204 18:18:59.201440  305713 start.go:125] createHost starting for "" (driver="docker")
	I0204 18:18:59.204883  305713 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0204 18:18:59.205128  305713 start.go:159] libmachine.API.Create for "addons-405803" (driver="docker")
	I0204 18:18:59.205170  305713 client.go:168] LocalClient.Create starting
	I0204 18:18:59.205298  305713 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20345-299426/.minikube/certs/ca.pem
	I0204 18:18:59.741554  305713 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20345-299426/.minikube/certs/cert.pem
	I0204 18:19:01.062470  305713 cli_runner.go:164] Run: docker network inspect addons-405803 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0204 18:19:01.079250  305713 cli_runner.go:211] docker network inspect addons-405803 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0204 18:19:01.079335  305713 network_create.go:284] running [docker network inspect addons-405803] to gather additional debugging logs...
	I0204 18:19:01.079357  305713 cli_runner.go:164] Run: docker network inspect addons-405803
	W0204 18:19:01.096196  305713 cli_runner.go:211] docker network inspect addons-405803 returned with exit code 1
	I0204 18:19:01.096226  305713 network_create.go:287] error running [docker network inspect addons-405803]: docker network inspect addons-405803: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-405803 not found
	I0204 18:19:01.096240  305713 network_create.go:289] output of [docker network inspect addons-405803]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-405803 not found
	
	** /stderr **
	I0204 18:19:01.096346  305713 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0204 18:19:01.113410  305713 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ccfeb0}
	I0204 18:19:01.113467  305713 network_create.go:124] attempt to create docker network addons-405803 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0204 18:19:01.113531  305713 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-405803 addons-405803
	I0204 18:19:01.190416  305713 network_create.go:108] docker network addons-405803 192.168.49.0/24 created
	I0204 18:19:01.190453  305713 kic.go:121] calculated static IP "192.168.49.2" for the "addons-405803" container
	I0204 18:19:01.190537  305713 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0204 18:19:01.206884  305713 cli_runner.go:164] Run: docker volume create addons-405803 --label name.minikube.sigs.k8s.io=addons-405803 --label created_by.minikube.sigs.k8s.io=true
	I0204 18:19:01.225972  305713 oci.go:103] Successfully created a docker volume addons-405803
	I0204 18:19:01.226100  305713 cli_runner.go:164] Run: docker run --rm --name addons-405803-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-405803 --entrypoint /usr/bin/test -v addons-405803:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I0204 18:19:02.652092  305713 cli_runner.go:217] Completed: docker run --rm --name addons-405803-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-405803 --entrypoint /usr/bin/test -v addons-405803:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib: (1.425948258s)
	I0204 18:19:02.652128  305713 oci.go:107] Successfully prepared a docker volume addons-405803
	I0204 18:19:02.652149  305713 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0204 18:19:02.652169  305713 kic.go:194] Starting extracting preloaded images to volume ...
	I0204 18:19:02.652303  305713 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20345-299426/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-405803:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I0204 18:19:06.845688  305713 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20345-299426/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-405803:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (4.193341369s)
	I0204 18:19:06.845725  305713 kic.go:203] duration metric: took 4.193552073s to extract preloaded images to volume ...
	W0204 18:19:06.845868  305713 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0204 18:19:06.845981  305713 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0204 18:19:06.899733  305713 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-405803 --name addons-405803 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-405803 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-405803 --network addons-405803 --ip 192.168.49.2 --volume addons-405803:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I0204 18:19:07.236424  305713 cli_runner.go:164] Run: docker container inspect addons-405803 --format={{.State.Running}}
	I0204 18:19:07.259892  305713 cli_runner.go:164] Run: docker container inspect addons-405803 --format={{.State.Status}}
	I0204 18:19:07.290811  305713 cli_runner.go:164] Run: docker exec addons-405803 stat /var/lib/dpkg/alternatives/iptables
	I0204 18:19:07.344564  305713 oci.go:144] the created container "addons-405803" has a running status.
	I0204 18:19:07.344592  305713 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20345-299426/.minikube/machines/addons-405803/id_rsa...
	I0204 18:19:07.643530  305713 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20345-299426/.minikube/machines/addons-405803/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0204 18:19:07.668026  305713 cli_runner.go:164] Run: docker container inspect addons-405803 --format={{.State.Status}}
	I0204 18:19:07.695106  305713 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0204 18:19:07.695125  305713 kic_runner.go:114] Args: [docker exec --privileged addons-405803 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0204 18:19:07.772403  305713 cli_runner.go:164] Run: docker container inspect addons-405803 --format={{.State.Status}}
	I0204 18:19:07.800406  305713 machine.go:93] provisionDockerMachine start ...
	I0204 18:19:07.800495  305713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-405803
	I0204 18:19:07.827807  305713 main.go:141] libmachine: Using SSH client type: native
	I0204 18:19:07.828064  305713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414280] 0x416ac0 <nil>  [] 0s} 127.0.0.1 33141 <nil> <nil>}
	I0204 18:19:07.828073  305713 main.go:141] libmachine: About to run SSH command:
	hostname
	I0204 18:19:08.016482  305713 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-405803
	
	I0204 18:19:08.016510  305713 ubuntu.go:169] provisioning hostname "addons-405803"
	I0204 18:19:08.016593  305713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-405803
	I0204 18:19:08.042800  305713 main.go:141] libmachine: Using SSH client type: native
	I0204 18:19:08.043085  305713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414280] 0x416ac0 <nil>  [] 0s} 127.0.0.1 33141 <nil> <nil>}
	I0204 18:19:08.043099  305713 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-405803 && echo "addons-405803" | sudo tee /etc/hostname
	I0204 18:19:08.197357  305713 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-405803
	
	I0204 18:19:08.197438  305713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-405803
	I0204 18:19:08.217024  305713 main.go:141] libmachine: Using SSH client type: native
	I0204 18:19:08.217283  305713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414280] 0x416ac0 <nil>  [] 0s} 127.0.0.1 33141 <nil> <nil>}
	I0204 18:19:08.217306  305713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-405803' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-405803/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-405803' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0204 18:19:08.350762  305713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0204 18:19:08.350841  305713 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20345-299426/.minikube CaCertPath:/home/jenkins/minikube-integration/20345-299426/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20345-299426/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20345-299426/.minikube}
	I0204 18:19:08.350879  305713 ubuntu.go:177] setting up certificates
	I0204 18:19:08.350925  305713 provision.go:84] configureAuth start
	I0204 18:19:08.351025  305713 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-405803
	I0204 18:19:08.368685  305713 provision.go:143] copyHostCerts
	I0204 18:19:08.368775  305713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20345-299426/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20345-299426/.minikube/ca.pem (1078 bytes)
	I0204 18:19:08.368904  305713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20345-299426/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20345-299426/.minikube/cert.pem (1123 bytes)
	I0204 18:19:08.368964  305713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20345-299426/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20345-299426/.minikube/key.pem (1679 bytes)
	I0204 18:19:08.369016  305713 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20345-299426/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20345-299426/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20345-299426/.minikube/certs/ca-key.pem org=jenkins.addons-405803 san=[127.0.0.1 192.168.49.2 addons-405803 localhost minikube]
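Note: minikube generates these certificates in Go (crypto/x509), not via openssl; as a rough hand-rolled equivalent, a server certificate carrying the same SANs and signed by the CA above could be produced like this (paths and validity period are illustrative only):

	openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.addons-405803" \
	  -keyout server-key.pem -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:addons-405803,DNS:localhost,DNS:minikube')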
	I0204 18:19:08.779553  305713 provision.go:177] copyRemoteCerts
	I0204 18:19:08.779631  305713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0204 18:19:08.779675  305713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-405803
	I0204 18:19:08.797042  305713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/addons-405803/id_rsa Username:docker}
	I0204 18:19:08.890688  305713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20345-299426/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0204 18:19:08.916507  305713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20345-299426/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0204 18:19:08.941173  305713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20345-299426/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0204 18:19:08.965828  305713 provision.go:87] duration metric: took 614.872167ms to configureAuth
	I0204 18:19:08.965857  305713 ubuntu.go:193] setting minikube options for container-runtime
	I0204 18:19:08.966081  305713 config.go:182] Loaded profile config "addons-405803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0204 18:19:08.966220  305713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-405803
	I0204 18:19:08.983021  305713 main.go:141] libmachine: Using SSH client type: native
	I0204 18:19:08.983262  305713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414280] 0x416ac0 <nil>  [] 0s} 127.0.0.1 33141 <nil> <nil>}
	I0204 18:19:08.983279  305713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0204 18:19:09.206395  305713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0204 18:19:09.206462  305713 machine.go:96] duration metric: took 1.406036221s to provisionDockerMachine
	I0204 18:19:09.206487  305713 client.go:171] duration metric: took 10.001305061s to LocalClient.Create
	I0204 18:19:09.206515  305713 start.go:167] duration metric: took 10.001387431s to libmachine.API.Create "addons-405803"
	I0204 18:19:09.206550  305713 start.go:293] postStartSetup for "addons-405803" (driver="docker")
	I0204 18:19:09.206582  305713 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0204 18:19:09.206680  305713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0204 18:19:09.206746  305713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-405803
	I0204 18:19:09.224425  305713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/addons-405803/id_rsa Username:docker}
	I0204 18:19:09.317444  305713 ssh_runner.go:195] Run: cat /etc/os-release
	I0204 18:19:09.320498  305713 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0204 18:19:09.320581  305713 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0204 18:19:09.320599  305713 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0204 18:19:09.320607  305713 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0204 18:19:09.320617  305713 filesync.go:126] Scanning /home/jenkins/minikube-integration/20345-299426/.minikube/addons for local assets ...
	I0204 18:19:09.320693  305713 filesync.go:126] Scanning /home/jenkins/minikube-integration/20345-299426/.minikube/files for local assets ...
	I0204 18:19:09.320719  305713 start.go:296] duration metric: took 114.143997ms for postStartSetup
	I0204 18:19:09.321037  305713 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-405803
	I0204 18:19:09.337511  305713 profile.go:143] Saving config to /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/config.json ...
	I0204 18:19:09.337878  305713 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0204 18:19:09.337941  305713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-405803
	I0204 18:19:09.354414  305713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/addons-405803/id_rsa Username:docker}
	I0204 18:19:09.440730  305713 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0204 18:19:09.445005  305713 start.go:128] duration metric: took 10.24354863s to createHost
	I0204 18:19:09.445032  305713 start.go:83] releasing machines lock for "addons-405803", held for 10.243682879s
	I0204 18:19:09.445105  305713 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-405803
	I0204 18:19:09.462359  305713 ssh_runner.go:195] Run: cat /version.json
	I0204 18:19:09.462403  305713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0204 18:19:09.462412  305713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-405803
	I0204 18:19:09.462467  305713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-405803
	I0204 18:19:09.484417  305713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/addons-405803/id_rsa Username:docker}
	I0204 18:19:09.488576  305713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/addons-405803/id_rsa Username:docker}
	I0204 18:19:09.715033  305713 ssh_runner.go:195] Run: systemctl --version
	I0204 18:19:09.719389  305713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0204 18:19:09.860848  305713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0204 18:19:09.865342  305713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0204 18:19:09.887039  305713 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0204 18:19:09.887114  305713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0204 18:19:09.921450  305713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
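Note: the two find/mv passes above neutralize any pre-existing loopback and bridge/podman CNI configs by renaming them with a `.mk_disabled` suffix, so only the CNI minikube installs later is active. The renamed files stay on disk and can be inspected afterwards:

	ls -l /etc/cni/net.d/*.mk_disabled
	# e.g. 87-podman-bridge.conflist.mk_disabled, 100-crio-bridge.conf.mk_disabled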
	I0204 18:19:09.921476  305713 start.go:495] detecting cgroup driver to use...
	I0204 18:19:09.921512  305713 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0204 18:19:09.921584  305713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0204 18:19:09.937260  305713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0204 18:19:09.948784  305713 docker.go:217] disabling cri-docker service (if available) ...
	I0204 18:19:09.948897  305713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0204 18:19:09.963324  305713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0204 18:19:09.978769  305713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0204 18:19:10.076473  305713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0204 18:19:10.175211  305713 docker.go:233] disabling docker service ...
	I0204 18:19:10.175283  305713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0204 18:19:10.194841  305713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0204 18:19:10.207262  305713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0204 18:19:10.296327  305713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0204 18:19:10.395614  305713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0204 18:19:10.407983  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0204 18:19:10.425276  305713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0204 18:19:10.425395  305713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0204 18:19:10.435359  305713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0204 18:19:10.435455  305713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0204 18:19:10.445990  305713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0204 18:19:10.456555  305713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0204 18:19:10.467473  305713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0204 18:19:10.477400  305713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0204 18:19:10.487680  305713 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0204 18:19:10.504019  305713 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
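Note: after the sed edits above, the CRI-O drop-in should read approximately as follows (section headers are assumed from stock CRI-O packaging; only the keys these commands touch are shown):

	# /etc/crio/crio.conf.d/02-crio.conf (excerpt)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]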
	I0204 18:19:10.513876  305713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0204 18:19:10.522862  305713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0204 18:19:10.531520  305713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0204 18:19:10.626871  305713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0204 18:19:10.739432  305713 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0204 18:19:10.739563  305713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
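Note: the 60s socket wait above amounts to polling stat on the CRI-O socket until it appears; a hand-rolled equivalent (sketch only) would be:

	timeout 60 sh -c 'until stat /var/run/crio/crio.sock >/dev/null 2>&1; do sleep 1; done'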
	I0204 18:19:10.743170  305713 start.go:563] Will wait 60s for crictl version
	I0204 18:19:10.743276  305713 ssh_runner.go:195] Run: which crictl
	I0204 18:19:10.746664  305713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0204 18:19:10.785981  305713 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0204 18:19:10.786121  305713 ssh_runner.go:195] Run: crio --version
	I0204 18:19:10.825153  305713 ssh_runner.go:195] Run: crio --version
	I0204 18:19:10.870918  305713 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.24.6 ...
	I0204 18:19:10.874036  305713 cli_runner.go:164] Run: docker network inspect addons-405803 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0204 18:19:10.890482  305713 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0204 18:19:10.894079  305713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0204 18:19:10.905027  305713 kubeadm.go:883] updating cluster {Name:addons-405803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-405803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0204 18:19:10.905151  305713 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0204 18:19:10.905214  305713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0204 18:19:10.984252  305713 crio.go:514] all images are preloaded for cri-o runtime.
	I0204 18:19:10.984274  305713 crio.go:433] Images already preloaded, skipping extraction
	I0204 18:19:10.984334  305713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0204 18:19:11.031661  305713 crio.go:514] all images are preloaded for cri-o runtime.
	I0204 18:19:11.031680  305713 cache_images.go:84] Images are preloaded, skipping loading
	I0204 18:19:11.031688  305713 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.32.1 crio true true} ...
	I0204 18:19:11.031781  305713 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-405803 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:addons-405803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0204 18:19:11.031864  305713 ssh_runner.go:195] Run: crio config
	I0204 18:19:11.101748  305713 cni.go:84] Creating CNI manager for ""
	I0204 18:19:11.101796  305713 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0204 18:19:11.101837  305713 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0204 18:19:11.101875  305713 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-405803 NodeName:addons-405803 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0204 18:19:11.102094  305713 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-405803"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0204 18:19:11.102213  305713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0204 18:19:11.112506  305713 binaries.go:44] Found k8s binaries, skipping transfer
	I0204 18:19:11.112590  305713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0204 18:19:11.122367  305713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0204 18:19:11.141977  305713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0204 18:19:11.161985  305713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
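Note: once the rendered kubeadm config lands on the node, it can be sanity-checked by hand before init runs; `kubeadm config validate` (available in recent kubeadm releases) is one way to do that. Illustrative only, not part of this run:

	sudo /var/lib/minikube/binaries/v1.32.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new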
	I0204 18:19:11.181767  305713 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0204 18:19:11.185534  305713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0204 18:19:11.197217  305713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0204 18:19:11.287893  305713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0204 18:19:11.301272  305713 certs.go:68] Setting up /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803 for IP: 192.168.49.2
	I0204 18:19:11.301353  305713 certs.go:194] generating shared ca certs ...
	I0204 18:19:11.301385  305713 certs.go:226] acquiring lock for ca certs: {Name:mk2ba4fbcce08471c2f461d11db5884e97db5cc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0204 18:19:11.301540  305713 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20345-299426/.minikube/ca.key
	I0204 18:19:11.618299  305713 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20345-299426/.minikube/ca.crt ...
	I0204 18:19:11.618332  305713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20345-299426/.minikube/ca.crt: {Name:mk381ea304035e0070ccc646a3a188378ed94a26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0204 18:19:11.618533  305713 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20345-299426/.minikube/ca.key ...
	I0204 18:19:11.618547  305713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20345-299426/.minikube/ca.key: {Name:mkdca6d45814fe9d51b4358c4160cc2af435973f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0204 18:19:11.618635  305713 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20345-299426/.minikube/proxy-client-ca.key
	I0204 18:19:11.749827  305713 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20345-299426/.minikube/proxy-client-ca.crt ...
	I0204 18:19:11.749856  305713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20345-299426/.minikube/proxy-client-ca.crt: {Name:mke69aff31e0449b7600ae23f99933efe9632ef6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0204 18:19:11.750035  305713 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20345-299426/.minikube/proxy-client-ca.key ...
	I0204 18:19:11.750049  305713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20345-299426/.minikube/proxy-client-ca.key: {Name:mk169f996eb5f5a0623c1aa73692d6c29eaa829b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0204 18:19:11.750133  305713 certs.go:256] generating profile certs ...
	I0204 18:19:11.750200  305713 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.key
	I0204 18:19:11.750218  305713 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt with IP's: []
	I0204 18:19:12.121914  305713 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt ...
	I0204 18:19:12.121950  305713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: {Name:mk266c6dd47fe66e531c231d4f9aa7318b59175c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0204 18:19:12.122138  305713 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.key ...
	I0204 18:19:12.122151  305713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.key: {Name:mkf0165a94ec0f3e10ddf697a11bcdf5600b525b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0204 18:19:12.122233  305713 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/apiserver.key.de78d7d3
	I0204 18:19:12.122257  305713 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/apiserver.crt.de78d7d3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0204 18:19:12.375073  305713 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/apiserver.crt.de78d7d3 ...
	I0204 18:19:12.375108  305713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/apiserver.crt.de78d7d3: {Name:mked08acdcb697db06c9b4fea31be5c2d727ef36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0204 18:19:12.376775  305713 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/apiserver.key.de78d7d3 ...
	I0204 18:19:12.376798  305713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/apiserver.key.de78d7d3: {Name:mke86466af62b427af31080db3d84cc434592269 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0204 18:19:12.378169  305713 certs.go:381] copying /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/apiserver.crt.de78d7d3 -> /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/apiserver.crt
	I0204 18:19:12.378262  305713 certs.go:385] copying /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/apiserver.key.de78d7d3 -> /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/apiserver.key
	I0204 18:19:12.379109  305713 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/proxy-client.key
	I0204 18:19:12.379137  305713 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/proxy-client.crt with IP's: []
	I0204 18:19:12.965316  305713 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/proxy-client.crt ...
	I0204 18:19:12.965347  305713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/proxy-client.crt: {Name:mkd75697a3cdf0b7f0d0139fc197fb5994f4925e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0204 18:19:12.965522  305713 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/proxy-client.key ...
	I0204 18:19:12.965537  305713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/proxy-client.key: {Name:mk4f9cfaf4995f9a6e7fa1d7ebd0e73c70bec46a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0204 18:19:12.965743  305713 certs.go:484] found cert: /home/jenkins/minikube-integration/20345-299426/.minikube/certs/ca-key.pem (1679 bytes)
	I0204 18:19:12.965787  305713 certs.go:484] found cert: /home/jenkins/minikube-integration/20345-299426/.minikube/certs/ca.pem (1078 bytes)
	I0204 18:19:12.965813  305713 certs.go:484] found cert: /home/jenkins/minikube-integration/20345-299426/.minikube/certs/cert.pem (1123 bytes)
	I0204 18:19:12.965838  305713 certs.go:484] found cert: /home/jenkins/minikube-integration/20345-299426/.minikube/certs/key.pem (1679 bytes)
	I0204 18:19:12.966478  305713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20345-299426/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0204 18:19:12.993315  305713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20345-299426/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0204 18:19:13.021086  305713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20345-299426/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0204 18:19:13.046265  305713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20345-299426/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0204 18:19:13.070589  305713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0204 18:19:13.095750  305713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0204 18:19:13.120213  305713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0204 18:19:13.144426  305713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0204 18:19:13.168895  305713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20345-299426/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0204 18:19:13.193181  305713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0204 18:19:13.210496  305713 ssh_runner.go:195] Run: openssl version
	I0204 18:19:13.215641  305713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0204 18:19:13.224975  305713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0204 18:19:13.228341  305713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  4 18:19 /usr/share/ca-certificates/minikubeCA.pem
	I0204 18:19:13.228445  305713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0204 18:19:13.235119  305713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
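Note: OpenSSL locates trusted CAs in /etc/ssl/certs by subject-hash filenames, which is why the symlink above is named b5213941.0; the hash comes straight from the `openssl x509 -hash` call two lines up. Done by hand it would look like:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # -> b5213941.0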
	I0204 18:19:13.244442  305713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0204 18:19:13.247639  305713 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0204 18:19:13.247693  305713 kubeadm.go:392] StartCluster: {Name:addons-405803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-405803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0204 18:19:13.247777  305713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0204 18:19:13.247839  305713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0204 18:19:13.284222  305713 cri.go:89] found id: ""
	I0204 18:19:13.284299  305713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0204 18:19:13.293347  305713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0204 18:19:13.302160  305713 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0204 18:19:13.302237  305713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0204 18:19:13.311204  305713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0204 18:19:13.311265  305713 kubeadm.go:157] found existing configuration files:
	
	I0204 18:19:13.311330  305713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0204 18:19:13.320606  305713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0204 18:19:13.320699  305713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0204 18:19:13.329096  305713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0204 18:19:13.338242  305713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0204 18:19:13.338428  305713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0204 18:19:13.346651  305713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0204 18:19:13.355069  305713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0204 18:19:13.355166  305713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0204 18:19:13.363496  305713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0204 18:19:13.372158  305713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0204 18:19:13.372287  305713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0204 18:19:13.380955  305713 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0204 18:19:13.422711  305713 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0204 18:19:13.422949  305713 kubeadm.go:310] [preflight] Running pre-flight checks
	I0204 18:19:13.446878  305713 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0204 18:19:13.447000  305713 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1075-aws
	I0204 18:19:13.447064  305713 kubeadm.go:310] OS: Linux
	I0204 18:19:13.447133  305713 kubeadm.go:310] CGROUPS_CPU: enabled
	I0204 18:19:13.447208  305713 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0204 18:19:13.447283  305713 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0204 18:19:13.447358  305713 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0204 18:19:13.447428  305713 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0204 18:19:13.447502  305713 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0204 18:19:13.447575  305713 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0204 18:19:13.447652  305713 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0204 18:19:13.447720  305713 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0204 18:19:13.508737  305713 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0204 18:19:13.508931  305713 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0204 18:19:13.509051  305713 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0204 18:19:13.520536  305713 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0204 18:19:13.525349  305713 out.go:235]   - Generating certificates and keys ...
	I0204 18:19:13.525606  305713 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0204 18:19:13.525740  305713 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0204 18:19:14.126944  305713 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0204 18:19:14.522162  305713 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0204 18:19:15.224715  305713 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0204 18:19:15.562103  305713 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0204 18:19:15.787241  305713 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0204 18:19:15.787391  305713 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-405803 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0204 18:19:16.293303  305713 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0204 18:19:16.293641  305713 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-405803 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0204 18:19:16.853758  305713 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0204 18:19:17.308944  305713 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0204 18:19:17.895295  305713 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0204 18:19:17.895579  305713 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0204 18:19:18.396700  305713 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0204 18:19:18.593763  305713 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0204 18:19:18.840148  305713 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0204 18:19:19.277656  305713 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0204 18:19:19.963397  305713 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0204 18:19:19.964135  305713 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0204 18:19:19.968948  305713 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0204 18:19:19.972598  305713 out.go:235]   - Booting up control plane ...
	I0204 18:19:19.972705  305713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0204 18:19:19.972797  305713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0204 18:19:19.973803  305713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0204 18:19:19.984979  305713 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0204 18:19:19.991786  305713 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0204 18:19:19.991841  305713 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0204 18:19:20.094717  305713 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0204 18:19:20.094841  305713 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0204 18:19:21.096481  305713 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001848727s
	I0204 18:19:21.096572  305713 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0204 18:19:27.099225  305713 kubeadm.go:310] [api-check] The API server is healthy after 6.002758891s
	I0204 18:19:27.121809  305713 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0204 18:19:27.137917  305713 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0204 18:19:27.172993  305713 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0204 18:19:27.173191  305713 kubeadm.go:310] [mark-control-plane] Marking the node addons-405803 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0204 18:19:27.185539  305713 kubeadm.go:310] [bootstrap-token] Using token: tj3oj2.7h4q63qf72f3wds5
	I0204 18:19:27.188413  305713 out.go:235]   - Configuring RBAC rules ...
	I0204 18:19:27.188545  305713 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0204 18:19:27.193952  305713 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0204 18:19:27.205170  305713 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0204 18:19:27.211990  305713 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0204 18:19:27.216756  305713 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0204 18:19:27.222622  305713 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0204 18:19:27.507153  305713 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0204 18:19:27.972667  305713 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0204 18:19:28.506231  305713 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0204 18:19:28.507495  305713 kubeadm.go:310] 
	I0204 18:19:28.507568  305713 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0204 18:19:28.507574  305713 kubeadm.go:310] 
	I0204 18:19:28.507651  305713 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0204 18:19:28.507656  305713 kubeadm.go:310] 
	I0204 18:19:28.507682  305713 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0204 18:19:28.507741  305713 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0204 18:19:28.507792  305713 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0204 18:19:28.507797  305713 kubeadm.go:310] 
	I0204 18:19:28.507851  305713 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0204 18:19:28.507856  305713 kubeadm.go:310] 
	I0204 18:19:28.507904  305713 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0204 18:19:28.507909  305713 kubeadm.go:310] 
	I0204 18:19:28.507962  305713 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0204 18:19:28.508037  305713 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0204 18:19:28.508106  305713 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0204 18:19:28.508110  305713 kubeadm.go:310] 
	I0204 18:19:28.508217  305713 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0204 18:19:28.508296  305713 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0204 18:19:28.508301  305713 kubeadm.go:310] 
	I0204 18:19:28.508385  305713 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tj3oj2.7h4q63qf72f3wds5 \
	I0204 18:19:28.508487  305713 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1791642cb6005ccb969ca9bf078dd50584feac0ef4af79a56d7c4e860bec6d7b \
	I0204 18:19:28.508515  305713 kubeadm.go:310] 	--control-plane 
	I0204 18:19:28.508520  305713 kubeadm.go:310] 
	I0204 18:19:28.508605  305713 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0204 18:19:28.508610  305713 kubeadm.go:310] 
	I0204 18:19:28.508691  305713 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tj3oj2.7h4q63qf72f3wds5 \
	I0204 18:19:28.508793  305713 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1791642cb6005ccb969ca9bf078dd50584feac0ef4af79a56d7c4e860bec6d7b 
	I0204 18:19:28.511279  305713 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0204 18:19:28.511559  305713 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1075-aws\n", err: exit status 1
	I0204 18:19:28.511689  305713 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0204 18:19:28.511709  305713 cni.go:84] Creating CNI manager for ""
	I0204 18:19:28.511717  305713 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0204 18:19:28.516655  305713 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0204 18:19:28.519679  305713 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0204 18:19:28.524150  305713 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0204 18:19:28.524228  305713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0204 18:19:28.544548  305713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0204 18:19:28.832590  305713 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0204 18:19:28.832722  305713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0204 18:19:28.832774  305713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-405803 minikube.k8s.io/updated_at=2025_02_04T18_19_28_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=2ad12868b53d667fdb2ff045ead964d3d2f95148 minikube.k8s.io/name=addons-405803 minikube.k8s.io/primary=true
	I0204 18:19:29.015379  305713 ops.go:34] apiserver oom_adj: -16
	I0204 18:19:29.015528  305713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0204 18:19:29.515883  305713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0204 18:19:30.016096  305713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0204 18:19:30.516459  305713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0204 18:19:31.016066  305713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0204 18:19:31.515742  305713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0204 18:19:32.015970  305713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0204 18:19:32.516321  305713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0204 18:19:32.625800  305713 kubeadm.go:1113] duration metric: took 3.793129368s to wait for elevateKubeSystemPrivileges
	I0204 18:19:32.625830  305713 kubeadm.go:394] duration metric: took 19.378140304s to StartCluster
	I0204 18:19:32.625847  305713 settings.go:142] acquiring lock: {Name:mk4bc017e9421f7469cd44441a2e64bd5c305941 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0204 18:19:32.625954  305713 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20345-299426/kubeconfig
	I0204 18:19:32.626341  305713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20345-299426/kubeconfig: {Name:mk3d1cc03c24874b9249b98b2ff642db4ee61973 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0204 18:19:32.626528  305713 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0204 18:19:32.626635  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0204 18:19:32.626887  305713 config.go:182] Loaded profile config "addons-405803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0204 18:19:32.626937  305713 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
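Note: the toEnable map above is the resolved on/off state for every addon minikube knows about; the same information can be read back interactively from the profile this run created (profile name taken from the logs):

	out/minikube-linux-arm64 -p addons-405803 addons list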
	I0204 18:19:32.627009  305713 addons.go:69] Setting yakd=true in profile "addons-405803"
	I0204 18:19:32.627023  305713 addons.go:238] Setting addon yakd=true in "addons-405803"
	I0204 18:19:32.627045  305713 host.go:66] Checking if "addons-405803" exists ...
	I0204 18:19:32.627542  305713 cli_runner.go:164] Run: docker container inspect addons-405803 --format={{.State.Status}}
	I0204 18:19:32.627787  305713 addons.go:69] Setting inspektor-gadget=true in profile "addons-405803"
	I0204 18:19:32.627804  305713 addons.go:238] Setting addon inspektor-gadget=true in "addons-405803"
	I0204 18:19:32.627827  305713 host.go:66] Checking if "addons-405803" exists ...
	I0204 18:19:32.628248  305713 cli_runner.go:164] Run: docker container inspect addons-405803 --format={{.State.Status}}
	I0204 18:19:32.628614  305713 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-405803"
	I0204 18:19:32.628653  305713 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-405803"
	I0204 18:19:32.628692  305713 host.go:66] Checking if "addons-405803" exists ...
	I0204 18:19:32.629141  305713 cli_runner.go:164] Run: docker container inspect addons-405803 --format={{.State.Status}}
	I0204 18:19:32.629384  305713 addons.go:69] Setting metrics-server=true in profile "addons-405803"
	I0204 18:19:32.629409  305713 addons.go:238] Setting addon metrics-server=true in "addons-405803"
	I0204 18:19:32.629432  305713 host.go:66] Checking if "addons-405803" exists ...
	I0204 18:19:32.629926  305713 cli_runner.go:164] Run: docker container inspect addons-405803 --format={{.State.Status}}
	I0204 18:19:32.636417  305713 addons.go:69] Setting cloud-spanner=true in profile "addons-405803"
	I0204 18:19:32.636451  305713 addons.go:238] Setting addon cloud-spanner=true in "addons-405803"
	I0204 18:19:32.636499  305713 host.go:66] Checking if "addons-405803" exists ...
	I0204 18:19:32.637053  305713 cli_runner.go:164] Run: docker container inspect addons-405803 --format={{.State.Status}}
	I0204 18:19:32.637474  305713 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-405803"
	I0204 18:19:32.637497  305713 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-405803"
	I0204 18:19:32.637522  305713 host.go:66] Checking if "addons-405803" exists ...
	I0204 18:19:32.637940  305713 cli_runner.go:164] Run: docker container inspect addons-405803 --format={{.State.Status}}
	I0204 18:19:32.651160  305713 addons.go:69] Setting registry=true in profile "addons-405803"
	I0204 18:19:32.651273  305713 addons.go:238] Setting addon registry=true in "addons-405803"
	I0204 18:19:32.651341  305713 host.go:66] Checking if "addons-405803" exists ...
	I0204 18:19:32.651854  305713 cli_runner.go:164] Run: docker container inspect addons-405803 --format={{.State.Status}}
	I0204 18:19:32.656341  305713 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-405803"
	I0204 18:19:32.656421  305713 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-405803"
	I0204 18:19:32.656465  305713 host.go:66] Checking if "addons-405803" exists ...
	I0204 18:19:32.657021  305713 cli_runner.go:164] Run: docker container inspect addons-405803 --format={{.State.Status}}
	I0204 18:19:32.668789  305713 addons.go:69] Setting default-storageclass=true in profile "addons-405803"
	I0204 18:19:32.668866  305713 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-405803"
	I0204 18:19:32.669567  305713 cli_runner.go:164] Run: docker container inspect addons-405803 --format={{.State.Status}}
	I0204 18:19:32.677219  305713 addons.go:69] Setting storage-provisioner=true in profile "addons-405803"
	I0204 18:19:32.677610  305713 addons.go:238] Setting addon storage-provisioner=true in "addons-405803"
	I0204 18:19:32.677916  305713 host.go:66] Checking if "addons-405803" exists ...
	I0204 18:19:32.678832  305713 cli_runner.go:164] Run: docker container inspect addons-405803 --format={{.State.Status}}
	I0204 18:19:32.685133  305713 addons.go:69] Setting gcp-auth=true in profile "addons-405803"
	I0204 18:19:32.685234  305713 mustload.go:65] Loading cluster: addons-405803
	I0204 18:19:32.685487  305713 config.go:182] Loaded profile config "addons-405803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0204 18:19:32.685870  305713 cli_runner.go:164] Run: docker container inspect addons-405803 --format={{.State.Status}}
	I0204 18:19:32.709858  305713 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-405803"
	I0204 18:19:32.709950  305713 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-405803"
	I0204 18:19:32.710451  305713 cli_runner.go:164] Run: docker container inspect addons-405803 --format={{.State.Status}}
	I0204 18:19:32.716587  305713 addons.go:69] Setting ingress=true in profile "addons-405803"
	I0204 18:19:32.716725  305713 addons.go:238] Setting addon ingress=true in "addons-405803"
	I0204 18:19:32.716820  305713 host.go:66] Checking if "addons-405803" exists ...
	I0204 18:19:32.717612  305713 cli_runner.go:164] Run: docker container inspect addons-405803 --format={{.State.Status}}
	I0204 18:19:32.746452  305713 addons.go:69] Setting volcano=true in profile "addons-405803"
	I0204 18:19:32.746494  305713 addons.go:238] Setting addon volcano=true in "addons-405803"
	I0204 18:19:32.746554  305713 host.go:66] Checking if "addons-405803" exists ...
	I0204 18:19:32.747072  305713 cli_runner.go:164] Run: docker container inspect addons-405803 --format={{.State.Status}}
	I0204 18:19:32.761085  305713 addons.go:69] Setting volumesnapshots=true in profile "addons-405803"
	I0204 18:19:32.764237  305713 addons.go:238] Setting addon volumesnapshots=true in "addons-405803"
	I0204 18:19:32.764326  305713 host.go:66] Checking if "addons-405803" exists ...
	I0204 18:19:32.764834  305713 cli_runner.go:164] Run: docker container inspect addons-405803 --format={{.State.Status}}
	I0204 18:19:32.766571  305713 addons.go:69] Setting ingress-dns=true in profile "addons-405803"
	I0204 18:19:32.766664  305713 addons.go:238] Setting addon ingress-dns=true in "addons-405803"
	I0204 18:19:32.766734  305713 host.go:66] Checking if "addons-405803" exists ...
	I0204 18:19:32.767316  305713 cli_runner.go:164] Run: docker container inspect addons-405803 --format={{.State.Status}}
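Before touching the cluster, each addon confirms the node container is up; that is what the repeated `docker container inspect addons-405803 --format={{.State.Status}}` calls are. A minimal Go equivalent that shells out the same way (a sketch, not minikube's actual cli_runner):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerStatus returns the Docker state string for the named container,
	// e.g. "running" while the minikube node is up.
	func containerStatus(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format", "{{.State.Status}}").Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		status, err := containerStatus("addons-405803")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println(status)
	}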
	I0204 18:19:32.784278  305713 out.go:177] * Verifying Kubernetes components...
	I0204 18:19:32.791245  305713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0204 18:19:32.802802  305713 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0204 18:19:32.845398  305713 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0204 18:19:32.851819  305713 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0204 18:19:32.851891  305713 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0204 18:19:32.852005  305713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-405803
	I0204 18:19:32.904447  305713 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0204 18:19:32.919796  305713 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0204 18:19:32.919816  305713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0204 18:19:32.919892  305713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-405803
	I0204 18:19:32.920291  305713 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0204 18:19:32.920347  305713 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0204 18:19:32.920430  305713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-405803
	I0204 18:19:32.944750  305713 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0204 18:19:32.944895  305713 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0204 18:19:32.952558  305713 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0204 18:19:32.952587  305713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0204 18:19:32.952683  305713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-405803
	I0204 18:19:32.953072  305713 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0204 18:19:32.953130  305713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0204 18:19:32.953267  305713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-405803
	I0204 18:19:32.982416  305713 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0204 18:19:32.985365  305713 out.go:177]   - Using image docker.io/registry:2.8.3
	I0204 18:19:32.992017  305713 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0204 18:19:32.992051  305713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0204 18:19:32.992144  305713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-405803
	I0204 18:19:32.993553  305713 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0204 18:19:32.998873  305713 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0204 18:19:33.008824  305713 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0204 18:19:33.014988  305713 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0204 18:19:33.020361  305713 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0204 18:19:33.025799  305713 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0204 18:19:33.028432  305713 addons.go:238] Setting addon default-storageclass=true in "addons-405803"
	I0204 18:19:33.028547  305713 host.go:66] Checking if "addons-405803" exists ...
	I0204 18:19:33.029027  305713 cli_runner.go:164] Run: docker container inspect addons-405803 --format={{.State.Status}}
	I0204 18:19:33.029239  305713 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0204 18:19:33.034356  305713 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-405803"
	I0204 18:19:33.034400  305713 host.go:66] Checking if "addons-405803" exists ...
	I0204 18:19:33.034926  305713 cli_runner.go:164] Run: docker container inspect addons-405803 --format={{.State.Status}}
	W0204 18:19:33.041323  305713 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0204 18:19:33.061343  305713 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0204 18:19:33.070602  305713 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0204 18:19:33.070748  305713 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0204 18:19:33.081258  305713 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0204 18:19:33.081292  305713 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0204 18:19:33.081381  305713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-405803
	I0204 18:19:33.082130  305713 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0204 18:19:33.086132  305713 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0204 18:19:33.086250  305713 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0204 18:19:33.086379  305713 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0204 18:19:33.086394  305713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0204 18:19:33.086470  305713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-405803
	I0204 18:19:33.104405  305713 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0204 18:19:33.110301  305713 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0204 18:19:33.110333  305713 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0204 18:19:33.110444  305713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-405803
	I0204 18:19:33.135508  305713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/addons-405803/id_rsa Username:docker}
	I0204 18:19:33.136317  305713 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0204 18:19:33.136334  305713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0204 18:19:33.136455  305713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-405803
	I0204 18:19:33.138718  305713 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0204 18:19:33.138739  305713 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0204 18:19:33.138825  305713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-405803
	I0204 18:19:33.147856  305713 host.go:66] Checking if "addons-405803" exists ...
	I0204 18:19:33.152027  305713 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0204 18:19:33.155225  305713 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0204 18:19:33.155248  305713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0204 18:19:33.155322  305713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-405803
	I0204 18:19:33.213393  305713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/addons-405803/id_rsa Username:docker}
	I0204 18:19:33.228051  305713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/addons-405803/id_rsa Username:docker}
	I0204 18:19:33.243414  305713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/addons-405803/id_rsa Username:docker}
	I0204 18:19:33.282403  305713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/addons-405803/id_rsa Username:docker}
	I0204 18:19:33.315774  305713 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0204 18:19:33.318602  305713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/addons-405803/id_rsa Username:docker}
	I0204 18:19:33.319648  305713 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0204 18:19:33.319667  305713 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0204 18:19:33.319758  305713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-405803
	I0204 18:19:33.330921  305713 out.go:177]   - Using image docker.io/busybox:stable
	I0204 18:19:33.344992  305713 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0204 18:19:33.345018  305713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0204 18:19:33.345107  305713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-405803
	I0204 18:19:33.396445  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0204 18:19:33.396673  305713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/addons-405803/id_rsa Username:docker}
	I0204 18:19:33.427692  305713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/addons-405803/id_rsa Username:docker}
	I0204 18:19:33.430309  305713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/addons-405803/id_rsa Username:docker}
	I0204 18:19:33.434031  305713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/addons-405803/id_rsa Username:docker}
	I0204 18:19:33.434443  305713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/addons-405803/id_rsa Username:docker}
	I0204 18:19:33.440510  305713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/addons-405803/id_rsa Username:docker}
	I0204 18:19:33.505492  305713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/addons-405803/id_rsa Username:docker}
	I0204 18:19:33.516555  305713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/addons-405803/id_rsa Username:docker}
	I0204 18:19:33.745197  305713 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0204 18:19:33.745272  305713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0204 18:19:33.772731  305713 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0204 18:19:33.772805  305713 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0204 18:19:33.816406  305713 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.025053501s)
	I0204 18:19:33.816529  305713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0204 18:19:33.830938  305713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0204 18:19:33.835554  305713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0204 18:19:33.879719  305713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0204 18:19:33.883401  305713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0204 18:19:33.894402  305713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0204 18:19:33.904254  305713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
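The pattern throughout this section is two-step: each addon manifest is staged into /etc/kubernetes/addons over scp, then applied with the bundled kubectl, with KUBECONFIG pointing at the in-VM kubeconfig (passed as a sudo argument so it survives env reset). The apply step, reduced to a sketch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyManifest mirrors the apply commands in the log:
	// sudo KUBECONFIG=... /var/lib/minikube/binaries/v1.32.1/kubectl apply -f <path>
	func applyManifest(path string) error {
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.32.1/kubectl",
			"apply", "-f", path)
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		if err := applyManifest("/etc/kubernetes/addons/ingress-deploy.yaml"); err != nil {
			fmt.Println("apply failed:", err)
		}
	}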
	I0204 18:19:33.913861  305713 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0204 18:19:33.913935  305713 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0204 18:19:34.061664  305713 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0204 18:19:34.061747  305713 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0204 18:19:34.066164  305713 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0204 18:19:34.066236  305713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0204 18:19:34.069312  305713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0204 18:19:34.077856  305713 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0204 18:19:34.077943  305713 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0204 18:19:34.130574  305713 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0204 18:19:34.130653  305713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0204 18:19:34.173077  305713 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0204 18:19:34.173155  305713 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0204 18:19:34.225979  305713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0204 18:19:34.301801  305713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0204 18:19:34.341914  305713 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0204 18:19:34.341949  305713 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0204 18:19:34.352334  305713 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0204 18:19:34.352411  305713 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0204 18:19:34.360914  305713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0204 18:19:34.363807  305713 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0204 18:19:34.363896  305713 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0204 18:19:34.400882  305713 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0204 18:19:34.400972  305713 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0204 18:19:34.558578  305713 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0204 18:19:34.558673  305713 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0204 18:19:34.561987  305713 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0204 18:19:34.562079  305713 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0204 18:19:34.602691  305713 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0204 18:19:34.602800  305713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0204 18:19:34.616103  305713 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0204 18:19:34.616301  305713 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0204 18:19:34.763066  305713 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0204 18:19:34.763171  305713 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0204 18:19:34.769164  305713 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0204 18:19:34.769248  305713 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0204 18:19:34.788610  305713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0204 18:19:34.807853  305713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0204 18:19:34.916031  305713 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0204 18:19:34.916122  305713 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0204 18:19:34.919440  305713 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0204 18:19:34.919518  305713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0204 18:19:35.118449  305713 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0204 18:19:35.118536  305713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0204 18:19:35.132103  305713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0204 18:19:35.272304  305713 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0204 18:19:35.272392  305713 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0204 18:19:35.384340  305713 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0204 18:19:35.384425  305713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0204 18:19:35.537026  305713 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0204 18:19:35.537098  305713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0204 18:19:35.750390  305713 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0204 18:19:35.750474  305713 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0204 18:19:35.896001  305713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0204 18:19:36.511071  305713 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.114325545s)
	I0204 18:19:36.511152  305713 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
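The long sed pipeline that just completed edits the coredns ConfigMap in place: it inserts a hosts stanza ahead of the forward plugin so host.minikube.internal resolves to the gateway IP, and a log directive ahead of errors. Reconstructed from the command itself, the injected Corefile fragment is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

with fallthrough ensuring all other names still go to the forward plugin.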
	I0204 18:19:36.511651  305713 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.695082011s)
	I0204 18:19:36.513246  305713 node_ready.go:35] waiting up to 6m0s for node "addons-405803" to be "Ready" ...
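The node_ready wait (here and in the `"Ready":"False"` lines that follow) checks the node's Ready condition. minikube does this through the API; an equivalent check, sketched via kubectl's jsonpath filter (illustrative, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// nodeReady reports whether the node's Ready condition status is "True".
	func nodeReady(name string) bool {
		out, err := exec.Command("kubectl", "get", "node", name, "-o",
			`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		return err == nil && strings.TrimSpace(string(out)) == "True"
	}

	func main() {
		for !nodeReady("addons-405803") {
			fmt.Println(`node "addons-405803" has status "Ready":"False"`)
			time.Sleep(2 * time.Second)
		}
		fmt.Println("node is Ready")
	}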
	I0204 18:19:36.589729  305713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.758713062s)
	I0204 18:19:37.485953  305713 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-405803" context rescaled to 1 replicas
	I0204 18:19:38.594673  305713 node_ready.go:53] node "addons-405803" has status "Ready":"False"
	I0204 18:19:38.689975  305713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.810163772s)
	I0204 18:19:38.690095  305713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.806624532s)
	I0204 18:19:38.690151  305713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.854312127s)
	I0204 18:19:40.175873  305713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.271516812s)
	I0204 18:19:40.175979  305713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.106594662s)
	I0204 18:19:40.176060  305713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.950060847s)
	I0204 18:19:40.176113  305713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.874284184s)
	I0204 18:19:40.176132  305713 addons.go:479] Verifying addon registry=true in "addons-405803"
	I0204 18:19:40.176311  305713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.387589775s)
	I0204 18:19:40.176266  305713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.815263784s)
	I0204 18:19:40.176705  305713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.368758405s)
	I0204 18:19:40.176726  305713 addons.go:479] Verifying addon metrics-server=true in "addons-405803"
	I0204 18:19:40.176823  305713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.282344669s)
	I0204 18:19:40.176851  305713 addons.go:479] Verifying addon ingress=true in "addons-405803"
	I0204 18:19:40.179623  305713 out.go:177] * Verifying registry addon...
	I0204 18:19:40.179713  305713 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-405803 service yakd-dashboard -n yakd-dashboard
	
	I0204 18:19:40.179814  305713 out.go:177] * Verifying ingress addon...
	I0204 18:19:40.184369  305713 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0204 18:19:40.185364  305713 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0204 18:19:40.192917  305713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.060692678s)
	W0204 18:19:40.193006  305713 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0204 18:19:40.193073  305713 retry.go:31] will retry after 181.080267ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
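The failure above is a CRD ordering race: csi-hostpath-snapshotclass.yaml creates a VolumeSnapshotClass in the same batch that installs its CRD, and the API server has not yet established the new kind, hence "no matches for kind ... ensure CRDs are installed first". retry.go handles this by re-applying after a short delay (the retried command below also adds --force). A sketch of that retry-with-delay pattern, with hypothetical names:

	package main

	import (
		"fmt"
		"time"
	)

	// retryApply re-runs apply a few times with a growing delay, tolerating
	// transient "resource mapping not found" errors while CRDs settle.
	func retryApply(apply func() error, attempts int, delay time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = apply(); err == nil {
				return nil
			}
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2
		}
		return err
	}

	func main() {
		calls := 0
		err := retryApply(func() error {
			calls++
			if calls < 3 { // simulate the CRD not yet being established
				return fmt.Errorf("no matches for kind %q", "VolumeSnapshotClass")
			}
			return nil
		}, 5, 200*time.Millisecond)
		fmt.Println("result:", err)
	}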
	I0204 18:19:40.211362  305713 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0204 18:19:40.211477  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0204 18:19:40.221029  305713 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0204 18:19:40.224707  305713 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0204 18:19:40.224783  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
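kapi.go's "waiting for pod" loop lists pods by label selector and re-checks until the phase leaves Pending; the hundreds of near-identical lines that follow are iterations of that loop across the registry, ingress-nginx, csi-hostpath-driver, and gcp-auth selectors. One iteration in client-go terms (a sketch under the assumption of the usual in-VM kubeconfig path):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("ingress-nginx").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app.kubernetes.io/name=ingress-nginx"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			// the log's "current state: Pending" is this Phase field
			fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
		}
	}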
	I0204 18:19:40.374784  305713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0204 18:19:40.674339  305713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.778214066s)
	I0204 18:19:40.674422  305713 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-405803"
	I0204 18:19:40.679418  305713 out.go:177] * Verifying csi-hostpath-driver addon...
	I0204 18:19:40.683073  305713 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0204 18:19:40.711118  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:40.711987  305713 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0204 18:19:40.712009  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:40.713135  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:41.018380  305713 node_ready.go:53] node "addons-405803" has status "Ready":"False"
	I0204 18:19:41.186916  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:41.187429  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:41.189931  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:41.687412  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:41.690048  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:41.690373  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:42.190619  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:42.191058  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:42.191257  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:42.687353  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:42.689532  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:42.690094  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:43.139643  305713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.764760333s)
	I0204 18:19:43.188297  305713 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0204 18:19:43.188395  305713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-405803
	I0204 18:19:43.190469  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:43.190713  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:43.193285  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:43.207489  305713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/addons-405803/id_rsa Username:docker}
	I0204 18:19:43.307452  305713 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0204 18:19:43.326818  305713 addons.go:238] Setting addon gcp-auth=true in "addons-405803"
	I0204 18:19:43.326869  305713 host.go:66] Checking if "addons-405803" exists ...
	I0204 18:19:43.327325  305713 cli_runner.go:164] Run: docker container inspect addons-405803 --format={{.State.Status}}
	I0204 18:19:43.344213  305713 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0204 18:19:43.344273  305713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-405803
	I0204 18:19:43.365285  305713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/addons-405803/id_rsa Username:docker}
	I0204 18:19:43.466869  305713 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0204 18:19:43.469676  305713 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0204 18:19:43.472499  305713 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0204 18:19:43.472520  305713 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0204 18:19:43.490891  305713 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0204 18:19:43.490915  305713 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0204 18:19:43.509252  305713 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0204 18:19:43.509274  305713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0204 18:19:43.517933  305713 node_ready.go:53] node "addons-405803" has status "Ready":"False"
	I0204 18:19:43.529472  305713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0204 18:19:43.692368  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:43.693086  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:43.693436  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:44.077003  305713 addons.go:479] Verifying addon gcp-auth=true in "addons-405803"
	I0204 18:19:44.080029  305713 out.go:177] * Verifying gcp-auth addon...
	I0204 18:19:44.083889  305713 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0204 18:19:44.088744  305713 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0204 18:19:44.088768  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:44.188951  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:44.190010  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:44.190738  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:44.588582  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:44.686526  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:44.688493  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:44.689246  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:45.088465  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:45.189704  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:45.192553  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:45.192558  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:45.588459  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:45.687545  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:45.689153  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:45.690148  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:46.016760  305713 node_ready.go:53] node "addons-405803" has status "Ready":"False"
	I0204 18:19:46.087901  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:46.187474  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:46.187904  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:46.190181  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:46.587019  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:46.686450  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:46.686943  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:46.689280  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:47.087978  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:47.189198  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:47.189583  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:47.189718  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:47.587844  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:47.687526  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:47.688000  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:47.690295  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:48.016897  305713 node_ready.go:53] node "addons-405803" has status "Ready":"False"
	I0204 18:19:48.087073  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:48.187021  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:48.189712  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:48.190438  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:48.587797  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:48.688399  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:48.689069  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:48.690506  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:49.087629  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:49.187079  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:49.189801  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:49.190473  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:49.588777  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:49.688753  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:49.688893  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:49.691206  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:50.018247  305713 node_ready.go:53] node "addons-405803" has status "Ready":"False"
	I0204 18:19:50.088046  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:50.186450  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:50.189513  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:50.190474  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:50.587126  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:50.687561  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:50.688453  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:50.690140  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:51.088271  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:51.187262  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:51.188720  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:51.189984  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:51.586906  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:51.686822  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:51.688549  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:51.691959  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:52.087658  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:52.187028  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:52.188039  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:52.189895  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:52.516878  305713 node_ready.go:53] node "addons-405803" has status "Ready":"False"
	I0204 18:19:52.587405  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:52.687604  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:52.689047  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:52.690049  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:53.087738  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:53.186833  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:53.188764  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:53.190709  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:53.587895  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:53.688263  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:53.688326  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:53.689578  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:54.087353  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:54.186737  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:54.188007  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:54.189118  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:54.516921  305713 node_ready.go:53] node "addons-405803" has status "Ready":"False"
	I0204 18:19:54.588689  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:54.687370  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:54.689863  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:54.690871  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:55.088169  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:55.187159  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:55.189448  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:55.189890  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:55.587768  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:55.687081  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:55.687690  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:55.690014  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:56.088521  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:56.186734  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:56.187966  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:56.189602  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:56.516993  305713 node_ready.go:53] node "addons-405803" has status "Ready":"False"
	I0204 18:19:56.587059  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:56.686548  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:56.687319  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:56.689847  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:57.088328  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:57.186548  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:57.187460  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:57.189867  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:57.587828  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:57.687035  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:57.687666  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:57.689918  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:58.088107  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:58.186427  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:58.187356  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:58.189356  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:58.587053  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:58.686294  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:58.687447  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:58.689335  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:59.017151  305713 node_ready.go:53] node "addons-405803" has status "Ready":"False"
	I0204 18:19:59.088081  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:59.186426  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:59.188319  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:59.190400  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:19:59.587689  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:19:59.687795  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:19:59.688294  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:19:59.690566  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:00.095333  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:00.196409  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:00.196632  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:00.197300  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:00.588447  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:00.687240  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:00.688731  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:00.689927  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:01.017348  305713 node_ready.go:53] node "addons-405803" has status "Ready":"False"
	I0204 18:20:01.087498  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:01.186939  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:01.190266  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:01.190712  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:01.590426  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:01.687801  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:01.688358  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:01.690884  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:02.088251  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:02.189372  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:02.190822  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:02.193037  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:02.587039  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:02.686599  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:02.688443  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:02.689751  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:03.089657  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:03.187112  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:03.188390  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:03.190350  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:03.517235  305713 node_ready.go:53] node "addons-405803" has status "Ready":"False"
	I0204 18:20:03.588260  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:03.687043  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:03.688069  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:03.689714  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:04.087352  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:04.187213  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:04.188233  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:04.189774  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:04.587957  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:04.686817  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:04.687754  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:04.690079  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:05.088273  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:05.188479  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:05.188809  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:05.190561  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:05.587512  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:05.686499  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:05.688846  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:05.689727  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:06.016536  305713 node_ready.go:53] node "addons-405803" has status "Ready":"False"
	I0204 18:20:06.087633  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:06.186988  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:06.189439  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:06.190153  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:06.587177  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:06.686928  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:06.688913  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:06.689380  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:07.087478  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:07.189082  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:07.189905  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:07.190926  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:07.587383  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:07.688413  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:07.689854  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:07.690640  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:08.017739  305713 node_ready.go:53] node "addons-405803" has status "Ready":"False"
	I0204 18:20:08.087671  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:08.187354  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:08.188240  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:08.190906  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:08.587644  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:08.687850  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:08.689874  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:08.690931  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:09.087407  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:09.186899  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:09.189219  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:09.189646  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:09.587747  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:09.687878  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:09.688616  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:09.690205  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:10.088131  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:10.186893  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:10.188012  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:10.190101  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:10.516448  305713 node_ready.go:53] node "addons-405803" has status "Ready":"False"
	I0204 18:20:10.587026  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:10.688666  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:10.690607  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:10.690837  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:11.088861  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:11.186468  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:11.189787  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:11.190074  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:11.587392  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:11.688003  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:11.688552  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:11.690943  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:12.087900  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:12.187746  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:12.188399  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:12.190787  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:12.517551  305713 node_ready.go:53] node "addons-405803" has status "Ready":"False"
	I0204 18:20:12.587532  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:12.687415  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:12.688594  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:12.690337  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:13.087463  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:13.188349  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:13.188956  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:13.190489  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:13.587024  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:13.686245  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:13.688357  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:13.690116  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:14.087416  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:14.187676  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:14.190712  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:14.191596  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:14.587750  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:14.687674  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:14.689069  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:14.690181  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:15.018225  305713 node_ready.go:53] node "addons-405803" has status "Ready":"False"
	I0204 18:20:15.088241  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:15.188682  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:15.190404  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:15.191995  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:15.588004  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:15.687963  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:15.688275  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:15.690452  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:16.087649  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:16.186738  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:16.188581  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:16.189721  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:16.587718  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:16.689077  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:16.689734  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:16.691214  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:17.088002  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:17.186587  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:17.188687  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:17.189923  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:17.516974  305713 node_ready.go:53] node "addons-405803" has status "Ready":"False"
	I0204 18:20:17.587175  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:17.687344  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:17.688953  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:17.689861  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:18.088043  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:18.186355  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:18.188196  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:18.189581  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:18.587608  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:18.687193  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:18.689423  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:18.690518  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:19.088423  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:19.187352  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:19.189679  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:19.190846  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:19.519808  305713 node_ready.go:49] node "addons-405803" has status "Ready":"True"
	I0204 18:20:19.519875  305713 node_ready.go:38] duration metric: took 43.006536968s for node "addons-405803" to be "Ready" ...
	I0204 18:20:19.519900  305713 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
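The two readiness gates logged above (node_ready.go, then pod_ready.go for system-critical pods) can be queried directly against the cluster. A minimal sketch with kubectl, reusing the addons-405803 context and node name from this run; the jsonpath filter and the single-label query are illustrative, not part of the test harness:

	# Node Ready condition (what node_ready.go polls until "Ready":"True")
	kubectl --context addons-405803 get node addons-405803 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# One of the system-critical labels listed above (pod_ready.go checks each in turn)
	kubectl --context addons-405803 -n kube-system get pods -l k8s-app=kube-dns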
	I0204 18:20:19.536776  305713 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-wnpwg" in "kube-system" namespace to be "Ready" ...
	I0204 18:20:19.605101  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:19.851674  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:19.853199  305713 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0204 18:20:19.853227  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:19.854072  305713 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0204 18:20:19.854103  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
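The "Found N Pods for label selector" lines above show kapi.go resolving each add-on to a label-selector query once the node is Ready. A rough reproduction of that discovery step (the -A all-namespaces flag is an assumption, since the log does not record which namespace each selector targets):

	kubectl --context addons-405803 get pods -A \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver
	kubectl --context addons-405803 get pods -A \
	  -l kubernetes.io/minikube-addons=registry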
	I0204 18:20:20.095528  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:20.195274  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:20.197019  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:20.197486  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:20.588934  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:20.690054  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:20.691265  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:20.691628  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:21.091280  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:21.197060  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:21.198126  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:21.199693  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:21.544042  305713 pod_ready.go:103] pod "coredns-668d6bf9bc-wnpwg" in "kube-system" namespace has status "Ready":"False"
	I0204 18:20:21.600597  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:21.694631  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:21.697384  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:21.699101  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:22.091813  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:22.194756  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:22.196327  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:22.199376  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:22.544869  305713 pod_ready.go:93] pod "coredns-668d6bf9bc-wnpwg" in "kube-system" namespace has status "Ready":"True"
	I0204 18:20:22.544894  305713 pod_ready.go:82] duration metric: took 3.008015919s for pod "coredns-668d6bf9bc-wnpwg" in "kube-system" namespace to be "Ready" ...
	I0204 18:20:22.544928  305713 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-405803" in "kube-system" namespace to be "Ready" ...
	I0204 18:20:22.557152  305713 pod_ready.go:93] pod "etcd-addons-405803" in "kube-system" namespace has status "Ready":"True"
	I0204 18:20:22.557176  305713 pod_ready.go:82] duration metric: took 12.23759ms for pod "etcd-addons-405803" in "kube-system" namespace to be "Ready" ...
	I0204 18:20:22.557192  305713 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-405803" in "kube-system" namespace to be "Ready" ...
	I0204 18:20:22.574927  305713 pod_ready.go:93] pod "kube-apiserver-addons-405803" in "kube-system" namespace has status "Ready":"True"
	I0204 18:20:22.574996  305713 pod_ready.go:82] duration metric: took 17.781784ms for pod "kube-apiserver-addons-405803" in "kube-system" namespace to be "Ready" ...
	I0204 18:20:22.575023  305713 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-405803" in "kube-system" namespace to be "Ready" ...
	I0204 18:20:22.580524  305713 pod_ready.go:93] pod "kube-controller-manager-addons-405803" in "kube-system" namespace has status "Ready":"True"
	I0204 18:20:22.580551  305713 pod_ready.go:82] duration metric: took 5.5188ms for pod "kube-controller-manager-addons-405803" in "kube-system" namespace to be "Ready" ...
	I0204 18:20:22.580566  305713 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kt9pn" in "kube-system" namespace to be "Ready" ...
	I0204 18:20:22.587000  305713 pod_ready.go:93] pod "kube-proxy-kt9pn" in "kube-system" namespace has status "Ready":"True"
	I0204 18:20:22.587042  305713 pod_ready.go:82] duration metric: took 6.452269ms for pod "kube-proxy-kt9pn" in "kube-system" namespace to be "Ready" ...
	I0204 18:20:22.587054  305713 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-405803" in "kube-system" namespace to be "Ready" ...
	I0204 18:20:22.588396  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:22.687808  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:22.689873  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:22.692120  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:22.942334  305713 pod_ready.go:93] pod "kube-scheduler-addons-405803" in "kube-system" namespace has status "Ready":"True"
	I0204 18:20:22.942359  305713 pod_ready.go:82] duration metric: took 355.296255ms for pod "kube-scheduler-addons-405803" in "kube-system" namespace to be "Ready" ...
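Each pod_ready.go wait that just completed (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, with per-pod duration metrics) is equivalent to a condition-based kubectl wait. A hedged one-liner for the scheduler pod reported Ready above, with the timeout mirroring the 6m0s budget in the log:

	kubectl --context addons-405803 -n kube-system wait \
	  --for=condition=Ready pod/kube-scheduler-addons-405803 --timeout=6m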
	I0204 18:20:22.942371  305713 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-nsn5c" in "kube-system" namespace to be "Ready" ...
	I0204 18:20:23.091357  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:23.188708  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:23.191708  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:23.193110  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:23.587865  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:23.689383  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:23.690943  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:23.693701  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:24.089130  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:24.189389  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:24.191499  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:24.192233  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:24.587303  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:24.689467  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:24.691962  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:24.694516  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:24.962684  305713 pod_ready.go:103] pod "metrics-server-7fbb699795-nsn5c" in "kube-system" namespace has status "Ready":"False"
	I0204 18:20:25.088359  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:25.194063  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:25.195014  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:25.196267  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:25.587695  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:25.691602  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:25.693050  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:25.699219  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:26.134730  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:26.235140  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:26.236042  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:26.237646  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:26.589447  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:26.696378  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:26.698321  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:26.699870  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:26.974022  305713 pod_ready.go:103] pod "metrics-server-7fbb699795-nsn5c" in "kube-system" namespace has status "Ready":"False"
	I0204 18:20:27.088614  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:27.194163  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:27.196566  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:27.198742  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:27.588204  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:27.690679  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:27.691529  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:27.692030  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:28.087624  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:28.217926  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:28.218650  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:28.219502  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:28.449127  305713 pod_ready.go:93] pod "metrics-server-7fbb699795-nsn5c" in "kube-system" namespace has status "Ready":"True"
	I0204 18:20:28.449152  305713 pod_ready.go:82] duration metric: took 5.506753409s for pod "metrics-server-7fbb699795-nsn5c" in "kube-system" namespace to be "Ready" ...
	I0204 18:20:28.449165  305713 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-khhzw" in "kube-system" namespace to be "Ready" ...
	I0204 18:20:28.588358  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:28.688163  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:28.690202  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:28.692057  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:29.087637  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:29.188984  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:29.189601  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:29.192865  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:29.587739  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:29.688998  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:29.692100  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:29.692826  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:30.096893  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:30.192460  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:30.193304  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:30.194706  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:30.456905  305713 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-khhzw" in "kube-system" namespace has status "Ready":"False"
	I0204 18:20:30.587673  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:30.696678  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:30.698614  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:30.700749  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:31.088360  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:31.190469  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:31.191768  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:31.195566  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:31.600676  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:31.696465  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:31.697050  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:31.698420  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:32.089085  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:32.190634  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:32.197455  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:32.198197  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:32.457014  305713 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-khhzw" in "kube-system" namespace has status "Ready":"False"
	I0204 18:20:32.590605  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:32.690660  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:32.692405  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:32.693704  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:33.091624  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:33.189816  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:33.190323  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:33.191641  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:33.588613  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:33.693467  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:33.695756  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:33.707754  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:34.091760  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:34.191948  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:34.192534  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:34.194425  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:34.589328  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:34.692970  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:34.695421  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:34.699153  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:34.957576  305713 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-khhzw" in "kube-system" namespace has status "Ready":"False"
	I0204 18:20:35.094292  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:35.215503  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:35.216715  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:35.218340  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:35.587624  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:35.688669  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:35.690718  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:35.691151  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:36.087149  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:36.190862  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:36.192750  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:36.194250  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:36.588814  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:36.693316  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:36.695082  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:36.696369  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:37.100941  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:37.191599  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:37.192623  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:37.194860  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:37.454828  305713 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-khhzw" in "kube-system" namespace has status "Ready":"False"
	I0204 18:20:37.589947  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:37.691170  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:37.692394  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:37.693204  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:38.087725  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:38.189073  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:38.189661  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:38.191298  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:38.588096  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:38.694248  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:38.694455  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:38.696253  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:39.087258  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:39.193544  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:39.194556  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:39.196522  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:39.455505  305713 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-khhzw" in "kube-system" namespace has status "Ready":"False"
	I0204 18:20:39.593477  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:39.690191  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:39.690947  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:39.692279  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:40.088269  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:40.190066  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:40.191021  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:40.192141  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:40.588609  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:40.689311  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:40.696471  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:40.697023  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:41.090249  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:41.192551  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:41.194715  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:41.217749  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:41.457294  305713 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-khhzw" in "kube-system" namespace has status "Ready":"False"
	I0204 18:20:41.588715  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:41.689710  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:41.691207  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:41.694967  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:42.087904  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:42.193236  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:42.195167  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:42.196469  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:42.588484  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:42.690551  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:42.690992  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:42.692446  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:43.087821  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:43.195122  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:43.196837  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:43.199786  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:43.588328  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:43.693134  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:43.694498  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:43.695307  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:43.959495  305713 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-khhzw" in "kube-system" namespace has status "Ready":"False"
	I0204 18:20:44.088099  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:44.190440  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:44.190967  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:44.191393  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:44.588001  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:44.690211  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:44.691355  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:44.692799  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:45.093186  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:45.193452  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:45.196361  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:45.198472  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:45.588475  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:45.692676  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:45.694648  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:45.703055  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:46.087752  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:46.201244  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:46.202690  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:46.204452  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:46.456272  305713 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-khhzw" in "kube-system" namespace has status "Ready":"False"
	I0204 18:20:46.588621  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:46.693712  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:46.694719  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:46.703525  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:47.097872  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:47.187867  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:47.191655  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:47.192807  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:47.588697  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:47.688752  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:47.690385  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:47.691548  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:48.088320  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:48.205428  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:48.206945  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:48.208476  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:48.589472  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:48.690147  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:48.691472  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:48.693017  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:48.956061  305713 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-khhzw" in "kube-system" namespace has status "Ready":"False"
	I0204 18:20:49.087739  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:49.191885  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:49.199812  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:49.207104  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:49.588091  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:49.689573  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:49.697483  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:49.698435  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:50.087799  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:50.188037  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:50.191077  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:50.193614  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:50.587449  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:50.688012  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:50.691105  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:50.692124  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:50.958653  305713 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-khhzw" in "kube-system" namespace has status "Ready":"False"
	I0204 18:20:51.088518  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:51.191781  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:51.197813  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:51.200402  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:51.593728  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:51.691886  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:51.692853  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:51.693731  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:52.088321  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:52.190237  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:52.191199  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:52.192951  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:52.588299  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:52.689434  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:52.690840  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:52.693268  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:52.965842  305713 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-khhzw" in "kube-system" namespace has status "Ready":"False"
	I0204 18:20:53.087752  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:53.191776  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:53.193604  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:53.194759  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:53.588145  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:53.697459  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:53.699238  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:53.701054  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:54.094479  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:54.192163  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:54.193640  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:54.202623  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:54.588718  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:54.692643  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:54.693988  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:54.695376  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:55.089105  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:55.189174  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:55.189727  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:55.192994  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:55.456790  305713 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-khhzw" in "kube-system" namespace has status "Ready":"False"
	I0204 18:20:55.590703  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:55.694340  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:55.696670  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:55.698009  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:56.088364  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:56.188774  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:56.191855  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:56.192828  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:56.588620  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:56.692997  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:56.695392  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:56.697852  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:57.088731  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:57.192257  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:57.193408  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:57.194637  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:57.596095  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:57.704425  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:57.705313  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:57.706510  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:57.956342  305713 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-khhzw" in "kube-system" namespace has status "Ready":"False"
	I0204 18:20:58.088024  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:58.193680  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:58.193976  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:58.195729  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:58.588024  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:58.690832  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:58.691967  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:58.692670  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:59.087594  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:59.188705  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:59.192501  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:59.193785  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:59.587959  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:20:59.694225  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:20:59.696916  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:20:59.698425  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:20:59.957879  305713 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-khhzw" in "kube-system" namespace has status "Ready":"False"
	I0204 18:21:00.094267  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:00.206520  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:00.213264  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:00.214833  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:21:00.590428  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:00.688355  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:21:00.690173  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:00.691623  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:00.956112  305713 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-khhzw" in "kube-system" namespace has status "Ready":"True"
	I0204 18:21:00.956142  305713 pod_ready.go:82] duration metric: took 32.506966508s for pod "nvidia-device-plugin-daemonset-khhzw" in "kube-system" namespace to be "Ready" ...
	I0204 18:21:00.956167  305713 pod_ready.go:39] duration metric: took 41.436215671s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0204 18:21:00.956215  305713 api_server.go:52] waiting for apiserver process to appear ...
	I0204 18:21:00.956248  305713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0204 18:21:00.956315  305713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0204 18:21:00.997831  305713 cri.go:89] found id: "82075c36c66546085e9e15426460a71b44617fc12f18f91da09fbdda75e7692e"
	I0204 18:21:00.997853  305713 cri.go:89] found id: ""
	I0204 18:21:00.997862  305713 logs.go:282] 1 containers: [82075c36c66546085e9e15426460a71b44617fc12f18f91da09fbdda75e7692e]
	I0204 18:21:00.997938  305713 ssh_runner.go:195] Run: which crictl
	I0204 18:21:01.001674  305713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0204 18:21:01.001764  305713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0204 18:21:01.045032  305713 cri.go:89] found id: "4f64119ec6a5d17abb60728bbf74a86335aa084c64eaea70362f8fb51d84780f"
	I0204 18:21:01.045056  305713 cri.go:89] found id: ""
	I0204 18:21:01.045064  305713 logs.go:282] 1 containers: [4f64119ec6a5d17abb60728bbf74a86335aa084c64eaea70362f8fb51d84780f]
	I0204 18:21:01.045147  305713 ssh_runner.go:195] Run: which crictl
	I0204 18:21:01.048929  305713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0204 18:21:01.049032  305713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0204 18:21:01.091308  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:01.093528  305713 cri.go:89] found id: "7711137e84daa34cd0ec9b5027a6902acdf1a6a370f30551f51fa6505899bf7f"
	I0204 18:21:01.093549  305713 cri.go:89] found id: ""
	I0204 18:21:01.093558  305713 logs.go:282] 1 containers: [7711137e84daa34cd0ec9b5027a6902acdf1a6a370f30551f51fa6505899bf7f]
	I0204 18:21:01.093666  305713 ssh_runner.go:195] Run: which crictl
	I0204 18:21:01.097559  305713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0204 18:21:01.097646  305713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0204 18:21:01.142754  305713 cri.go:89] found id: "817c5e94f20c88d549ab94bc6f88ad31fd5cdad56631e72cc8d18f406da984c0"
	I0204 18:21:01.142783  305713 cri.go:89] found id: ""
	I0204 18:21:01.142808  305713 logs.go:282] 1 containers: [817c5e94f20c88d549ab94bc6f88ad31fd5cdad56631e72cc8d18f406da984c0]
	I0204 18:21:01.143075  305713 ssh_runner.go:195] Run: which crictl
	I0204 18:21:01.147514  305713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0204 18:21:01.147615  305713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0204 18:21:01.190079  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:01.193777  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:01.194331  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:21:01.200253  305713 cri.go:89] found id: "859d905d3d831f871af7bc06f64a989aac2c85bc958f52c745fe73a535c90a13"
	I0204 18:21:01.200285  305713 cri.go:89] found id: ""
	I0204 18:21:01.200294  305713 logs.go:282] 1 containers: [859d905d3d831f871af7bc06f64a989aac2c85bc958f52c745fe73a535c90a13]
	I0204 18:21:01.200369  305713 ssh_runner.go:195] Run: which crictl
	I0204 18:21:01.204561  305713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0204 18:21:01.204651  305713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0204 18:21:01.246350  305713 cri.go:89] found id: "83cf5e37db9a7c84b82367325d5eaffa1fef24a0bbbd4c297e58f9aee087cc3e"
	I0204 18:21:01.246428  305713 cri.go:89] found id: ""
	I0204 18:21:01.246443  305713 logs.go:282] 1 containers: [83cf5e37db9a7c84b82367325d5eaffa1fef24a0bbbd4c297e58f9aee087cc3e]
	I0204 18:21:01.246507  305713 ssh_runner.go:195] Run: which crictl
	I0204 18:21:01.250437  305713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0204 18:21:01.250533  305713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0204 18:21:01.295425  305713 cri.go:89] found id: "a03859214bff603c38c7c0a0d28552572163519bc8e47112b505ec1ef8b7a8c0"
	I0204 18:21:01.295451  305713 cri.go:89] found id: ""
	I0204 18:21:01.295461  305713 logs.go:282] 1 containers: [a03859214bff603c38c7c0a0d28552572163519bc8e47112b505ec1ef8b7a8c0]
	I0204 18:21:01.295552  305713 ssh_runner.go:195] Run: which crictl
	I0204 18:21:01.300112  305713 logs.go:123] Gathering logs for kube-apiserver [82075c36c66546085e9e15426460a71b44617fc12f18f91da09fbdda75e7692e] ...
	I0204 18:21:01.300150  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82075c36c66546085e9e15426460a71b44617fc12f18f91da09fbdda75e7692e"
	I0204 18:21:01.370019  305713 logs.go:123] Gathering logs for coredns [7711137e84daa34cd0ec9b5027a6902acdf1a6a370f30551f51fa6505899bf7f] ...
	I0204 18:21:01.370058  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7711137e84daa34cd0ec9b5027a6902acdf1a6a370f30551f51fa6505899bf7f"
	I0204 18:21:01.411757  305713 logs.go:123] Gathering logs for kube-controller-manager [83cf5e37db9a7c84b82367325d5eaffa1fef24a0bbbd4c297e58f9aee087cc3e] ...
	I0204 18:21:01.411799  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83cf5e37db9a7c84b82367325d5eaffa1fef24a0bbbd4c297e58f9aee087cc3e"
	I0204 18:21:01.488221  305713 logs.go:123] Gathering logs for CRI-O ...
	I0204 18:21:01.488258  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0204 18:21:01.584364  305713 logs.go:123] Gathering logs for kubelet ...
	I0204 18:21:01.584405  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0204 18:21:01.589412  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0204 18:21:01.651556  305713 logs.go:138] Found kubelet problem: Feb 04 18:19:31 addons-405803 kubelet[1499]: I0204 18:19:31.710730    1499 status_manager.go:890] "Failed to get status for pod" podUID="663ae19f-be2c-495a-b227-c4dc10ed7fe9" pod="kube-system/kube-proxy-kt9pn" err="pods \"kube-proxy-kt9pn\" is forbidden: User \"system:node:addons-405803\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-405803' and this object"
	W0204 18:21:01.677754  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: W0204 18:20:19.385468    1499 reflector.go:569] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-405803" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-405803' and this object
	W0204 18:21:01.678024  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: E0204 18:20:19.385533    1499 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-405803\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-405803' and this object" logger="UnhandledError"
	W0204 18:21:01.678218  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: W0204 18:20:19.385589    1499 reflector.go:569] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-405803" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-405803' and this object
	W0204 18:21:01.678450  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: E0204 18:20:19.385603    1499 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-405803\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-405803' and this object" logger="UnhandledError"
	W0204 18:21:01.678635  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: W0204 18:20:19.385662    1499 reflector.go:569] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-405803" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-405803' and this object
	W0204 18:21:01.678861  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: E0204 18:20:19.385674    1499 reflector.go:166] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-405803\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-405803' and this object" logger="UnhandledError"
	W0204 18:21:01.679041  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: W0204 18:20:19.385747    1499 reflector.go:569] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-405803" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-405803' and this object
	W0204 18:21:01.679263  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: E0204 18:20:19.385760    1499 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-405803\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-405803' and this object" logger="UnhandledError"
	W0204 18:21:01.679434  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: W0204 18:20:19.391056    1499 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-405803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-405803' and this object
	W0204 18:21:01.679656  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: E0204 18:20:19.391096    1499 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-405803\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-405803' and this object" logger="UnhandledError"
	I0204 18:21:01.691621  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:21:01.693814  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:01.696239  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:01.709877  305713 logs.go:123] Gathering logs for dmesg ...
	I0204 18:21:01.709903  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0204 18:21:01.728395  305713 logs.go:123] Gathering logs for describe nodes ...
	I0204 18:21:01.728426  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0204 18:21:01.969032  305713 logs.go:123] Gathering logs for kindnet [a03859214bff603c38c7c0a0d28552572163519bc8e47112b505ec1ef8b7a8c0] ...
	I0204 18:21:01.969120  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a03859214bff603c38c7c0a0d28552572163519bc8e47112b505ec1ef8b7a8c0"
	I0204 18:21:02.038482  305713 logs.go:123] Gathering logs for container status ...
	I0204 18:21:02.038669  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0204 18:21:02.088502  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:02.112663  305713 logs.go:123] Gathering logs for etcd [4f64119ec6a5d17abb60728bbf74a86335aa084c64eaea70362f8fb51d84780f] ...
	I0204 18:21:02.112753  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f64119ec6a5d17abb60728bbf74a86335aa084c64eaea70362f8fb51d84780f"
	I0204 18:21:02.196815  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:02.197498  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:21:02.198502  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:02.213898  305713 logs.go:123] Gathering logs for kube-scheduler [817c5e94f20c88d549ab94bc6f88ad31fd5cdad56631e72cc8d18f406da984c0] ...
	I0204 18:21:02.213935  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 817c5e94f20c88d549ab94bc6f88ad31fd5cdad56631e72cc8d18f406da984c0"
	I0204 18:21:02.287253  305713 logs.go:123] Gathering logs for kube-proxy [859d905d3d831f871af7bc06f64a989aac2c85bc958f52c745fe73a535c90a13] ...
	I0204 18:21:02.287292  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 859d905d3d831f871af7bc06f64a989aac2c85bc958f52c745fe73a535c90a13"
	I0204 18:21:02.360721  305713 out.go:358] Setting ErrFile to fd 2...
	I0204 18:21:02.360745  305713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0204 18:21:02.360802  305713 out.go:270] X Problems detected in kubelet:
	W0204 18:21:02.360814  305713 out.go:270]   Feb 04 18:20:19 addons-405803 kubelet[1499]: E0204 18:20:19.385674    1499 reflector.go:166] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-405803\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-405803' and this object" logger="UnhandledError"
	W0204 18:21:02.360824  305713 out.go:270]   Feb 04 18:20:19 addons-405803 kubelet[1499]: W0204 18:20:19.385747    1499 reflector.go:569] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-405803" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-405803' and this object
	W0204 18:21:02.360833  305713 out.go:270]   Feb 04 18:20:19 addons-405803 kubelet[1499]: E0204 18:20:19.385760    1499 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-405803\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-405803' and this object" logger="UnhandledError"
	W0204 18:21:02.360838  305713 out.go:270]   Feb 04 18:20:19 addons-405803 kubelet[1499]: W0204 18:20:19.391056    1499 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-405803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-405803' and this object
	W0204 18:21:02.360844  305713 out.go:270]   Feb 04 18:20:19 addons-405803 kubelet[1499]: E0204 18:20:19.391096    1499 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-405803\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-405803' and this object" logger="UnhandledError"
	I0204 18:21:02.360855  305713 out.go:358] Setting ErrFile to fd 2...
	I0204 18:21:02.360862  305713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0204 18:21:02.588494  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:02.695230  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:21:02.695619  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:02.696694  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:03.087664  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:03.188329  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:03.191634  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:03.192379  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:21:03.588536  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:03.688780  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:03.690132  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:21:03.692144  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:04.088399  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:04.188726  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:04.189658  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:21:04.192644  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:04.587728  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:04.689300  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:21:04.690802  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:04.696197  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:05.088406  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:05.189677  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:05.191000  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:21:05.194058  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:05.590089  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:05.691654  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:21:05.692682  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:05.693598  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:06.088391  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:06.191434  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:06.198900  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:06.199864  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:21:06.589555  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:06.696241  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:06.698102  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:06.706487  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:21:07.087730  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:07.188973  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:07.191094  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:21:07.192948  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:07.587787  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:07.688212  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:07.690470  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0204 18:21:07.691672  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:08.087505  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:08.188787  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:08.189600  305713 kapi.go:107] duration metric: took 1m28.005231259s to wait for kubernetes.io/minikube-addons=registry ...
	I0204 18:21:08.192434  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:08.588742  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:08.690855  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:08.696498  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:09.088923  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:09.190738  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:09.196299  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:09.588359  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:09.687847  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:09.690217  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:10.090498  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:10.189872  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:10.198436  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:10.588950  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:10.689816  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:10.691267  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:11.092858  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:11.195681  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:11.196798  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:11.587397  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:11.687968  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:11.690755  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:12.088746  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:12.187691  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:12.191426  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:12.361616  305713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0204 18:21:12.376961  305713 api_server.go:72] duration metric: took 1m39.750404899s to wait for apiserver process to appear ...
	I0204 18:21:12.376995  305713 api_server.go:88] waiting for apiserver healthz status ...
	I0204 18:21:12.377049  305713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0204 18:21:12.377123  305713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0204 18:21:12.420716  305713 cri.go:89] found id: "82075c36c66546085e9e15426460a71b44617fc12f18f91da09fbdda75e7692e"
	I0204 18:21:12.420779  305713 cri.go:89] found id: ""
	I0204 18:21:12.420802  305713 logs.go:282] 1 containers: [82075c36c66546085e9e15426460a71b44617fc12f18f91da09fbdda75e7692e]
	I0204 18:21:12.420893  305713 ssh_runner.go:195] Run: which crictl
	I0204 18:21:12.425367  305713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0204 18:21:12.425494  305713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0204 18:21:12.466447  305713 cri.go:89] found id: "4f64119ec6a5d17abb60728bbf74a86335aa084c64eaea70362f8fb51d84780f"
	I0204 18:21:12.466470  305713 cri.go:89] found id: ""
	I0204 18:21:12.466479  305713 logs.go:282] 1 containers: [4f64119ec6a5d17abb60728bbf74a86335aa084c64eaea70362f8fb51d84780f]
	I0204 18:21:12.466550  305713 ssh_runner.go:195] Run: which crictl
	I0204 18:21:12.470708  305713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0204 18:21:12.470851  305713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0204 18:21:12.542414  305713 cri.go:89] found id: "7711137e84daa34cd0ec9b5027a6902acdf1a6a370f30551f51fa6505899bf7f"
	I0204 18:21:12.542438  305713 cri.go:89] found id: ""
	I0204 18:21:12.542447  305713 logs.go:282] 1 containers: [7711137e84daa34cd0ec9b5027a6902acdf1a6a370f30551f51fa6505899bf7f]
	I0204 18:21:12.542504  305713 ssh_runner.go:195] Run: which crictl
	I0204 18:21:12.547462  305713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0204 18:21:12.547537  305713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0204 18:21:12.594389  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:12.638300  305713 cri.go:89] found id: "817c5e94f20c88d549ab94bc6f88ad31fd5cdad56631e72cc8d18f406da984c0"
	I0204 18:21:12.638323  305713 cri.go:89] found id: ""
	I0204 18:21:12.638332  305713 logs.go:282] 1 containers: [817c5e94f20c88d549ab94bc6f88ad31fd5cdad56631e72cc8d18f406da984c0]
	I0204 18:21:12.638391  305713 ssh_runner.go:195] Run: which crictl
	I0204 18:21:12.647730  305713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0204 18:21:12.647806  305713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0204 18:21:12.701081  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:12.703444  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:12.708673  305713 cri.go:89] found id: "859d905d3d831f871af7bc06f64a989aac2c85bc958f52c745fe73a535c90a13"
	I0204 18:21:12.708711  305713 cri.go:89] found id: ""
	I0204 18:21:12.708720  305713 logs.go:282] 1 containers: [859d905d3d831f871af7bc06f64a989aac2c85bc958f52c745fe73a535c90a13]
	I0204 18:21:12.708815  305713 ssh_runner.go:195] Run: which crictl
	I0204 18:21:12.712465  305713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0204 18:21:12.712568  305713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0204 18:21:12.756513  305713 cri.go:89] found id: "83cf5e37db9a7c84b82367325d5eaffa1fef24a0bbbd4c297e58f9aee087cc3e"
	I0204 18:21:12.756536  305713 cri.go:89] found id: ""
	I0204 18:21:12.756544  305713 logs.go:282] 1 containers: [83cf5e37db9a7c84b82367325d5eaffa1fef24a0bbbd4c297e58f9aee087cc3e]
	I0204 18:21:12.756621  305713 ssh_runner.go:195] Run: which crictl
	I0204 18:21:12.760264  305713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0204 18:21:12.760405  305713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0204 18:21:12.807061  305713 cri.go:89] found id: "a03859214bff603c38c7c0a0d28552572163519bc8e47112b505ec1ef8b7a8c0"
	I0204 18:21:12.807086  305713 cri.go:89] found id: ""
	I0204 18:21:12.807096  305713 logs.go:282] 1 containers: [a03859214bff603c38c7c0a0d28552572163519bc8e47112b505ec1ef8b7a8c0]
	I0204 18:21:12.807208  305713 ssh_runner.go:195] Run: which crictl
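Everything above is minikube's container-discovery pass: for each control-plane component it lists matching CRI containers and records the newest ID. The same discovery can be done by hand with crictl (a minimal sketch, assuming crictl talks to the default CRI-O socket; the component names mirror the log):

    # List all containers (any state) whose name matches a component,
    # printing only their IDs, exactly as the log's Run: lines do.
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd
    sudo crictl ps -a --quiet --name=kindnet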
	I0204 18:21:12.812093  305713 logs.go:123] Gathering logs for coredns [7711137e84daa34cd0ec9b5027a6902acdf1a6a370f30551f51fa6505899bf7f] ...
	I0204 18:21:12.812117  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7711137e84daa34cd0ec9b5027a6902acdf1a6a370f30551f51fa6505899bf7f"
	I0204 18:21:12.854415  305713 logs.go:123] Gathering logs for kube-proxy [859d905d3d831f871af7bc06f64a989aac2c85bc958f52c745fe73a535c90a13] ...
	I0204 18:21:12.854445  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 859d905d3d831f871af7bc06f64a989aac2c85bc958f52c745fe73a535c90a13"
	I0204 18:21:12.894914  305713 logs.go:123] Gathering logs for CRI-O ...
	I0204 18:21:12.894942  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0204 18:21:13.008561  305713 logs.go:123] Gathering logs for kubelet ...
	I0204 18:21:13.008605  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0204 18:21:13.073472  305713 logs.go:138] Found kubelet problem: Feb 04 18:19:31 addons-405803 kubelet[1499]: I0204 18:19:31.710730    1499 status_manager.go:890] "Failed to get status for pod" podUID="663ae19f-be2c-495a-b227-c4dc10ed7fe9" pod="kube-system/kube-proxy-kt9pn" err="pods \"kube-proxy-kt9pn\" is forbidden: User \"system:node:addons-405803\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-405803' and this object"
	I0204 18:21:13.104830  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0204 18:21:13.105015  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: W0204 18:20:19.385468    1499 reflector.go:569] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-405803" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-405803' and this object
	W0204 18:21:13.106123  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: E0204 18:20:19.385533    1499 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-405803\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-405803' and this object" logger="UnhandledError"
	W0204 18:21:13.106341  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: W0204 18:20:19.385589    1499 reflector.go:569] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-405803" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-405803' and this object
	W0204 18:21:13.106590  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: E0204 18:20:19.385603    1499 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-405803\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-405803' and this object" logger="UnhandledError"
	W0204 18:21:13.106808  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: W0204 18:20:19.385662    1499 reflector.go:569] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-405803" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-405803' and this object
	W0204 18:21:13.107547  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: E0204 18:20:19.385674    1499 reflector.go:166] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-405803\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-405803' and this object" logger="UnhandledError"
	W0204 18:21:13.107806  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: W0204 18:20:19.385747    1499 reflector.go:569] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-405803" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-405803' and this object
	W0204 18:21:13.108677  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: E0204 18:20:19.385760    1499 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-405803\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-405803' and this object" logger="UnhandledError"
	W0204 18:21:13.109012  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: W0204 18:20:19.391056    1499 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-405803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-405803' and this object
	W0204 18:21:13.109276  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: E0204 18:20:19.391096    1499 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-405803\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-405803' and this object" logger="UnhandledError"
	I0204 18:21:13.150770  305713 logs.go:123] Gathering logs for dmesg ...
	I0204 18:21:13.150849  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0204 18:21:13.171370  305713 logs.go:123] Gathering logs for describe nodes ...
	I0204 18:21:13.171443  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0204 18:21:13.204574  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:13.208697  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:13.318946  305713 logs.go:123] Gathering logs for etcd [4f64119ec6a5d17abb60728bbf74a86335aa084c64eaea70362f8fb51d84780f] ...
	I0204 18:21:13.319019  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f64119ec6a5d17abb60728bbf74a86335aa084c64eaea70362f8fb51d84780f"
	I0204 18:21:13.442642  305713 logs.go:123] Gathering logs for container status ...
	I0204 18:21:13.442756  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0204 18:21:13.541482  305713 logs.go:123] Gathering logs for kube-apiserver [82075c36c66546085e9e15426460a71b44617fc12f18f91da09fbdda75e7692e] ...
	I0204 18:21:13.541528  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82075c36c66546085e9e15426460a71b44617fc12f18f91da09fbdda75e7692e"
	I0204 18:21:13.588761  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:13.670213  305713 logs.go:123] Gathering logs for kube-scheduler [817c5e94f20c88d549ab94bc6f88ad31fd5cdad56631e72cc8d18f406da984c0] ...
	I0204 18:21:13.670256  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 817c5e94f20c88d549ab94bc6f88ad31fd5cdad56631e72cc8d18f406da984c0"
	I0204 18:21:13.689429  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:13.692571  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:13.748737  305713 logs.go:123] Gathering logs for kube-controller-manager [83cf5e37db9a7c84b82367325d5eaffa1fef24a0bbbd4c297e58f9aee087cc3e] ...
	I0204 18:21:13.748775  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83cf5e37db9a7c84b82367325d5eaffa1fef24a0bbbd4c297e58f9aee087cc3e"
	I0204 18:21:13.860054  305713 logs.go:123] Gathering logs for kindnet [a03859214bff603c38c7c0a0d28552572163519bc8e47112b505ec1ef8b7a8c0] ...
	I0204 18:21:13.860090  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a03859214bff603c38c7c0a0d28552572163519bc8e47112b505ec1ef8b7a8c0"
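Each "Gathering logs for ..." step above tails one container through the CRI, while runtime and kubelet logs come from journald. Reproduced by hand (a sketch; the container ID is the kindnet ID found earlier in this log):

    # Last 400 lines of a single container's logs via the CRI.
    sudo crictl logs --tail 400 a03859214bff603c38c7c0a0d28552572163519bc8e47112b505ec1ef8b7a8c0
    # Unit logs for the runtime and the kubelet come from journald instead.
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400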
	I0204 18:21:13.912864  305713 out.go:358] Setting ErrFile to fd 2...
	I0204 18:21:13.912896  305713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0204 18:21:13.912984  305713 out.go:270] X Problems detected in kubelet:
	W0204 18:21:13.913000  305713 out.go:270]   Feb 04 18:20:19 addons-405803 kubelet[1499]: E0204 18:20:19.385674    1499 reflector.go:166] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-405803\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-405803' and this object" logger="UnhandledError"
	W0204 18:21:13.913025  305713 out.go:270]   Feb 04 18:20:19 addons-405803 kubelet[1499]: W0204 18:20:19.385747    1499 reflector.go:569] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-405803" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-405803' and this object
	W0204 18:21:13.913150  305713 out.go:270]   Feb 04 18:20:19 addons-405803 kubelet[1499]: E0204 18:20:19.385760    1499 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-405803\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-405803' and this object" logger="UnhandledError"
	W0204 18:21:13.913166  305713 out.go:270]   Feb 04 18:20:19 addons-405803 kubelet[1499]: W0204 18:20:19.391056    1499 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-405803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-405803' and this object
	W0204 18:21:13.913173  305713 out.go:270]   Feb 04 18:20:19 addons-405803 kubelet[1499]: E0204 18:20:19.391096    1499 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-405803\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-405803' and this object" logger="UnhandledError"
	I0204 18:21:13.913197  305713 out.go:358] Setting ErrFile to fd 2...
	I0204 18:21:13.913205  305713 out.go:392] TERM=,COLORTERM=, which probably does not support color
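The flagged kubelet entries are Node-authorizer denials: a kubelet running as system:node:<name> may only read ConfigMaps referenced by pods already bound to its node, and the "no relationship found between node ... and this object" message appears while addon pods are still being scheduled. They are transient during startup rather than a misconfiguration. One way to probe the same authorization decision from outside the node (a sketch; kubectl auth can-i is standard, and the identity is taken from the log above):

    # Ask the API server whether the kubelet identity may list ConfigMaps
    # in one of the namespaces named in the warnings.
    kubectl auth can-i list configmaps \
      --as=system:node:addons-405803 --as-group=system:nodes \
      -n local-path-storage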
	I0204 18:21:14.087929  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:14.193953  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:14.195456  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:14.588894  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:14.688784  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:14.691995  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:15.087804  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:15.189833  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:15.193726  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:15.589376  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:15.690851  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:15.692299  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:16.088731  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:16.199455  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:16.201604  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:16.593088  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:16.692101  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:16.693937  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:17.090259  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:17.190533  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:17.194387  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:17.588378  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:17.700491  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:17.714491  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:18.090993  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:18.203074  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:18.204292  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:18.588014  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:18.688661  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:18.691446  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:19.093945  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:19.188392  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:19.190972  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:19.588538  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:19.689699  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:19.698529  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:20.088941  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:20.190662  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:20.195129  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:20.588504  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:20.689545  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:20.693086  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:21.088481  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:21.193471  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:21.197428  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:21.587786  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:21.689892  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:21.694330  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:22.090813  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:22.189597  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:22.192299  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:22.588561  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:22.688294  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:22.695699  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:23.088726  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:23.195406  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:23.196978  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:23.587878  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:23.689344  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:23.692456  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:23.914881  305713 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0204 18:21:23.923448  305713 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0204 18:21:23.924623  305713 api_server.go:141] control plane version: v1.32.1
	I0204 18:21:23.924655  305713 api_server.go:131] duration metric: took 11.547648198s to wait for apiserver health ...
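The healthz wait above is a plain HTTPS GET against the apiserver, retried until it returns 200 with body "ok". The equivalent manual probe (a sketch, reusing the node IP and port from the log; -k skips certificate verification, acceptable for a quick health check but not for real traffic):

    curl -k https://192.168.49.2:8443/healthz
    # Expected output on success: ok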
	I0204 18:21:23.924664  305713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0204 18:21:23.924686  305713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0204 18:21:23.924748  305713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0204 18:21:24.003321  305713 cri.go:89] found id: "82075c36c66546085e9e15426460a71b44617fc12f18f91da09fbdda75e7692e"
	I0204 18:21:24.003348  305713 cri.go:89] found id: ""
	I0204 18:21:24.003358  305713 logs.go:282] 1 containers: [82075c36c66546085e9e15426460a71b44617fc12f18f91da09fbdda75e7692e]
	I0204 18:21:24.003430  305713 ssh_runner.go:195] Run: which crictl
	I0204 18:21:24.009312  305713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0204 18:21:24.009392  305713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0204 18:21:24.087512  305713 cri.go:89] found id: "4f64119ec6a5d17abb60728bbf74a86335aa084c64eaea70362f8fb51d84780f"
	I0204 18:21:24.087533  305713 cri.go:89] found id: ""
	I0204 18:21:24.087542  305713 logs.go:282] 1 containers: [4f64119ec6a5d17abb60728bbf74a86335aa084c64eaea70362f8fb51d84780f]
	I0204 18:21:24.087602  305713 ssh_runner.go:195] Run: which crictl
	I0204 18:21:24.113406  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:24.115646  305713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0204 18:21:24.115747  305713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0204 18:21:24.210568  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:24.212778  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:24.219093  305713 cri.go:89] found id: "7711137e84daa34cd0ec9b5027a6902acdf1a6a370f30551f51fa6505899bf7f"
	I0204 18:21:24.219113  305713 cri.go:89] found id: ""
	I0204 18:21:24.219122  305713 logs.go:282] 1 containers: [7711137e84daa34cd0ec9b5027a6902acdf1a6a370f30551f51fa6505899bf7f]
	I0204 18:21:24.219180  305713 ssh_runner.go:195] Run: which crictl
	I0204 18:21:24.223582  305713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0204 18:21:24.223655  305713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0204 18:21:24.279005  305713 cri.go:89] found id: "817c5e94f20c88d549ab94bc6f88ad31fd5cdad56631e72cc8d18f406da984c0"
	I0204 18:21:24.279029  305713 cri.go:89] found id: ""
	I0204 18:21:24.279038  305713 logs.go:282] 1 containers: [817c5e94f20c88d549ab94bc6f88ad31fd5cdad56631e72cc8d18f406da984c0]
	I0204 18:21:24.279096  305713 ssh_runner.go:195] Run: which crictl
	I0204 18:21:24.283058  305713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0204 18:21:24.283133  305713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0204 18:21:24.359549  305713 cri.go:89] found id: "859d905d3d831f871af7bc06f64a989aac2c85bc958f52c745fe73a535c90a13"
	I0204 18:21:24.359584  305713 cri.go:89] found id: ""
	I0204 18:21:24.359594  305713 logs.go:282] 1 containers: [859d905d3d831f871af7bc06f64a989aac2c85bc958f52c745fe73a535c90a13]
	I0204 18:21:24.359656  305713 ssh_runner.go:195] Run: which crictl
	I0204 18:21:24.365555  305713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0204 18:21:24.365633  305713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0204 18:21:24.424100  305713 cri.go:89] found id: "83cf5e37db9a7c84b82367325d5eaffa1fef24a0bbbd4c297e58f9aee087cc3e"
	I0204 18:21:24.424121  305713 cri.go:89] found id: ""
	I0204 18:21:24.424130  305713 logs.go:282] 1 containers: [83cf5e37db9a7c84b82367325d5eaffa1fef24a0bbbd4c297e58f9aee087cc3e]
	I0204 18:21:24.424201  305713 ssh_runner.go:195] Run: which crictl
	I0204 18:21:24.431425  305713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0204 18:21:24.431501  305713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0204 18:21:24.483752  305713 cri.go:89] found id: "a03859214bff603c38c7c0a0d28552572163519bc8e47112b505ec1ef8b7a8c0"
	I0204 18:21:24.483774  305713 cri.go:89] found id: ""
	I0204 18:21:24.483782  305713 logs.go:282] 1 containers: [a03859214bff603c38c7c0a0d28552572163519bc8e47112b505ec1ef8b7a8c0]
	I0204 18:21:24.483845  305713 ssh_runner.go:195] Run: which crictl
	I0204 18:21:24.491054  305713 logs.go:123] Gathering logs for kindnet [a03859214bff603c38c7c0a0d28552572163519bc8e47112b505ec1ef8b7a8c0] ...
	I0204 18:21:24.491077  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a03859214bff603c38c7c0a0d28552572163519bc8e47112b505ec1ef8b7a8c0"
	I0204 18:21:24.577474  305713 logs.go:123] Gathering logs for container status ...
	I0204 18:21:24.577511  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0204 18:21:24.587982  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:24.660113  305713 logs.go:123] Gathering logs for kubelet ...
	I0204 18:21:24.660142  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0204 18:21:24.690014  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:24.691877  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0204 18:21:24.728046  305713 logs.go:138] Found kubelet problem: Feb 04 18:19:31 addons-405803 kubelet[1499]: I0204 18:19:31.710730    1499 status_manager.go:890] "Failed to get status for pod" podUID="663ae19f-be2c-495a-b227-c4dc10ed7fe9" pod="kube-system/kube-proxy-kt9pn" err="pods \"kube-proxy-kt9pn\" is forbidden: User \"system:node:addons-405803\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-405803' and this object"
	W0204 18:21:24.754857  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: W0204 18:20:19.385468    1499 reflector.go:569] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-405803" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-405803' and this object
	W0204 18:21:24.755107  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: E0204 18:20:19.385533    1499 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-405803\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-405803' and this object" logger="UnhandledError"
	W0204 18:21:24.755295  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: W0204 18:20:19.385589    1499 reflector.go:569] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-405803" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-405803' and this object
	W0204 18:21:24.755529  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: E0204 18:20:19.385603    1499 reflector.go:166] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-405803\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-405803' and this object" logger="UnhandledError"
	W0204 18:21:24.755713  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: W0204 18:20:19.385662    1499 reflector.go:569] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-405803" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-405803' and this object
	W0204 18:21:24.755936  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: E0204 18:20:19.385674    1499 reflector.go:166] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-405803\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-405803' and this object" logger="UnhandledError"
	W0204 18:21:24.756242  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: W0204 18:20:19.385747    1499 reflector.go:569] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-405803" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-405803' and this object
	W0204 18:21:24.756489  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: E0204 18:20:19.385760    1499 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-405803\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-405803' and this object" logger="UnhandledError"
	W0204 18:21:24.756686  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: W0204 18:20:19.391056    1499 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-405803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-405803' and this object
	W0204 18:21:24.756926  305713 logs.go:138] Found kubelet problem: Feb 04 18:20:19 addons-405803 kubelet[1499]: E0204 18:20:19.391096    1499 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-405803\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-405803' and this object" logger="UnhandledError"
	I0204 18:21:24.795120  305713 logs.go:123] Gathering logs for describe nodes ...
	I0204 18:21:24.795166  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0204 18:21:24.982328  305713 logs.go:123] Gathering logs for kube-apiserver [82075c36c66546085e9e15426460a71b44617fc12f18f91da09fbdda75e7692e] ...
	I0204 18:21:24.982425  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82075c36c66546085e9e15426460a71b44617fc12f18f91da09fbdda75e7692e"
	I0204 18:21:25.083679  305713 logs.go:123] Gathering logs for kube-scheduler [817c5e94f20c88d549ab94bc6f88ad31fd5cdad56631e72cc8d18f406da984c0] ...
	I0204 18:21:25.083763  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 817c5e94f20c88d549ab94bc6f88ad31fd5cdad56631e72cc8d18f406da984c0"
	I0204 18:21:25.100516  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:25.149168  305713 logs.go:123] Gathering logs for kube-proxy [859d905d3d831f871af7bc06f64a989aac2c85bc958f52c745fe73a535c90a13] ...
	I0204 18:21:25.149208  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 859d905d3d831f871af7bc06f64a989aac2c85bc958f52c745fe73a535c90a13"
	I0204 18:21:25.215592  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:25.221744  305713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0204 18:21:25.234897  305713 logs.go:123] Gathering logs for kube-controller-manager [83cf5e37db9a7c84b82367325d5eaffa1fef24a0bbbd4c297e58f9aee087cc3e] ...
	I0204 18:21:25.234936  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83cf5e37db9a7c84b82367325d5eaffa1fef24a0bbbd4c297e58f9aee087cc3e"
	I0204 18:21:25.322583  305713 logs.go:123] Gathering logs for dmesg ...
	I0204 18:21:25.322622  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0204 18:21:25.341728  305713 logs.go:123] Gathering logs for etcd [4f64119ec6a5d17abb60728bbf74a86335aa084c64eaea70362f8fb51d84780f] ...
	I0204 18:21:25.341760  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f64119ec6a5d17abb60728bbf74a86335aa084c64eaea70362f8fb51d84780f"
	I0204 18:21:25.407767  305713 logs.go:123] Gathering logs for coredns [7711137e84daa34cd0ec9b5027a6902acdf1a6a370f30551f51fa6505899bf7f] ...
	I0204 18:21:25.407802  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7711137e84daa34cd0ec9b5027a6902acdf1a6a370f30551f51fa6505899bf7f"
	I0204 18:21:25.453958  305713 logs.go:123] Gathering logs for CRI-O ...
	I0204 18:21:25.453986  305713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0204 18:21:25.579146  305713 out.go:358] Setting ErrFile to fd 2...
	I0204 18:21:25.579221  305713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0204 18:21:25.579311  305713 out.go:270] X Problems detected in kubelet:
	W0204 18:21:25.579478  305713 out.go:270]   Feb 04 18:20:19 addons-405803 kubelet[1499]: E0204 18:20:19.385674    1499 reflector.go:166] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-405803\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-405803' and this object" logger="UnhandledError"
	W0204 18:21:25.579526  305713 out.go:270]   Feb 04 18:20:19 addons-405803 kubelet[1499]: W0204 18:20:19.385747    1499 reflector.go:569] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-405803" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-405803' and this object
	W0204 18:21:25.579561  305713 out.go:270]   Feb 04 18:20:19 addons-405803 kubelet[1499]: E0204 18:20:19.385760    1499 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-405803\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-405803' and this object" logger="UnhandledError"
	W0204 18:21:25.579599  305713 out.go:270]   Feb 04 18:20:19 addons-405803 kubelet[1499]: W0204 18:20:19.391056    1499 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-405803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-405803' and this object
	W0204 18:21:25.579633  305713 out.go:270]   Feb 04 18:20:19 addons-405803 kubelet[1499]: E0204 18:20:19.391096    1499 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-405803\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-405803' and this object" logger="UnhandledError"
	I0204 18:21:25.579667  305713 out.go:358] Setting ErrFile to fd 2...
	I0204 18:21:25.579697  305713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0204 18:21:25.598629  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:25.697811  305713 kapi.go:107] duration metric: took 1m45.512442852s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0204 18:21:25.699745  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:26.087754  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:26.189719  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:26.587481  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:26.692091  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:27.091187  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:27.199149  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:27.588003  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:27.688825  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:28.088552  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:28.188293  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:28.588309  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:28.689256  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:29.088654  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:29.187948  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:29.588105  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:29.689655  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:30.088237  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:30.188529  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:30.590279  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:30.692732  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:31.088520  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:31.188267  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:31.600730  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:31.689382  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:32.089728  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:32.191618  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:32.587291  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0204 18:21:32.689880  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:33.088415  305713 kapi.go:107] duration metric: took 1m49.004526004s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0204 18:21:33.094488  305713 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-405803 cluster.
	I0204 18:21:33.099599  305713 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0204 18:21:33.103363  305713 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
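Per the note above, the opt-out is an ordinary pod label with the gcp-auth-skip-secret key, and it must be present when the pod is created, since the addon's mutating webhook only acts on incoming pods. A minimal sketch (pod name and image are illustrative):

    # Create a pod the gcp-auth webhook will leave unmutated.
    kubectl run demo --image=nginx --labels=gcp-auth-skip-secret=true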
	I0204 18:21:33.197092  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:33.689128  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:34.189053  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:34.688288  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:35.188648  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:35.590463  305713 system_pods.go:59] 18 kube-system pods found
	I0204 18:21:35.590499  305713 system_pods.go:61] "coredns-668d6bf9bc-wnpwg" [0a64ca50-820c-442b-a55e-f717c9e82228] Running
	I0204 18:21:35.590506  305713 system_pods.go:61] "csi-hostpath-attacher-0" [40a3c96c-f0dd-4713-8d6c-6ecd57009c92] Running
	I0204 18:21:35.590511  305713 system_pods.go:61] "csi-hostpath-resizer-0" [21816b47-6109-4cde-ab14-ec2713bce151] Running
	I0204 18:21:35.590520  305713 system_pods.go:61] "csi-hostpathplugin-tpqm8" [c9d13be9-b747-4191-89ba-ac1e368dccb2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0204 18:21:35.590526  305713 system_pods.go:61] "etcd-addons-405803" [d56711d7-d87a-41c1-82ac-322be033a1f9] Running
	I0204 18:21:35.590532  305713 system_pods.go:61] "kindnet-64zfx" [fcd7ccea-fa4e-4777-9ff1-006e6a289331] Running
	I0204 18:21:35.590537  305713 system_pods.go:61] "kube-apiserver-addons-405803" [a9146b3c-baa1-43d4-8745-e424d6546f90] Running
	I0204 18:21:35.590545  305713 system_pods.go:61] "kube-controller-manager-addons-405803" [84b9f775-2432-4774-a72b-1abf64f196ec] Running
	I0204 18:21:35.590550  305713 system_pods.go:61] "kube-ingress-dns-minikube" [f92e8d23-87fd-4e15-a177-9e94e9b45066] Running
	I0204 18:21:35.590559  305713 system_pods.go:61] "kube-proxy-kt9pn" [663ae19f-be2c-495a-b227-c4dc10ed7fe9] Running
	I0204 18:21:35.590563  305713 system_pods.go:61] "kube-scheduler-addons-405803" [3f73cfb3-d424-4b76-aedb-c7ce528f43ae] Running
	I0204 18:21:35.590567  305713 system_pods.go:61] "metrics-server-7fbb699795-nsn5c" [d52d4c27-1cdc-47f6-89b7-ac84dc713b0e] Running
	I0204 18:21:35.590571  305713 system_pods.go:61] "nvidia-device-plugin-daemonset-khhzw" [cb29110a-dc1d-4c8e-a151-cb776e2f36b1] Running
	I0204 18:21:35.590580  305713 system_pods.go:61] "registry-6c88467877-mdjf2" [1570cbcd-de9e-40d6-9b39-eeaa2ae29aa3] Running
	I0204 18:21:35.590584  305713 system_pods.go:61] "registry-proxy-wx964" [03b6e46f-0fbf-4f50-a587-01760afd7776] Running
	I0204 18:21:35.590588  305713 system_pods.go:61] "snapshot-controller-68b874b76f-46f42" [b725c93d-4d13-4d2a-abc9-109b5f239e34] Running
	I0204 18:21:35.590593  305713 system_pods.go:61] "snapshot-controller-68b874b76f-jnzjl" [2eba0b1b-2ed6-449f-a870-6795d58ab1da] Running
	I0204 18:21:35.590602  305713 system_pods.go:61] "storage-provisioner" [844cb879-115c-4702-9138-eafe338ac24e] Running
	I0204 18:21:35.590607  305713 system_pods.go:74] duration metric: took 11.66593756s to wait for pod list to return data ...
	I0204 18:21:35.590620  305713 default_sa.go:34] waiting for default service account to be created ...
	I0204 18:21:35.593592  305713 default_sa.go:45] found service account: "default"
	I0204 18:21:35.593621  305713 default_sa.go:55] duration metric: took 2.994571ms for default service account to be created ...
	I0204 18:21:35.593632  305713 system_pods.go:116] waiting for k8s-apps to be running ...
	I0204 18:21:35.605050  305713 system_pods.go:86] 18 kube-system pods found
	I0204 18:21:35.605087  305713 system_pods.go:89] "coredns-668d6bf9bc-wnpwg" [0a64ca50-820c-442b-a55e-f717c9e82228] Running
	I0204 18:21:35.605095  305713 system_pods.go:89] "csi-hostpath-attacher-0" [40a3c96c-f0dd-4713-8d6c-6ecd57009c92] Running
	I0204 18:21:35.605101  305713 system_pods.go:89] "csi-hostpath-resizer-0" [21816b47-6109-4cde-ab14-ec2713bce151] Running
	I0204 18:21:35.605109  305713 system_pods.go:89] "csi-hostpathplugin-tpqm8" [c9d13be9-b747-4191-89ba-ac1e368dccb2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0204 18:21:35.605114  305713 system_pods.go:89] "etcd-addons-405803" [d56711d7-d87a-41c1-82ac-322be033a1f9] Running
	I0204 18:21:35.605152  305713 system_pods.go:89] "kindnet-64zfx" [fcd7ccea-fa4e-4777-9ff1-006e6a289331] Running
	I0204 18:21:35.605165  305713 system_pods.go:89] "kube-apiserver-addons-405803" [a9146b3c-baa1-43d4-8745-e424d6546f90] Running
	I0204 18:21:35.605170  305713 system_pods.go:89] "kube-controller-manager-addons-405803" [84b9f775-2432-4774-a72b-1abf64f196ec] Running
	I0204 18:21:35.605176  305713 system_pods.go:89] "kube-ingress-dns-minikube" [f92e8d23-87fd-4e15-a177-9e94e9b45066] Running
	I0204 18:21:35.605182  305713 system_pods.go:89] "kube-proxy-kt9pn" [663ae19f-be2c-495a-b227-c4dc10ed7fe9] Running
	I0204 18:21:35.605187  305713 system_pods.go:89] "kube-scheduler-addons-405803" [3f73cfb3-d424-4b76-aedb-c7ce528f43ae] Running
	I0204 18:21:35.605194  305713 system_pods.go:89] "metrics-server-7fbb699795-nsn5c" [d52d4c27-1cdc-47f6-89b7-ac84dc713b0e] Running
	I0204 18:21:35.605198  305713 system_pods.go:89] "nvidia-device-plugin-daemonset-khhzw" [cb29110a-dc1d-4c8e-a151-cb776e2f36b1] Running
	I0204 18:21:35.605202  305713 system_pods.go:89] "registry-6c88467877-mdjf2" [1570cbcd-de9e-40d6-9b39-eeaa2ae29aa3] Running
	I0204 18:21:35.605207  305713 system_pods.go:89] "registry-proxy-wx964" [03b6e46f-0fbf-4f50-a587-01760afd7776] Running
	I0204 18:21:35.605221  305713 system_pods.go:89] "snapshot-controller-68b874b76f-46f42" [b725c93d-4d13-4d2a-abc9-109b5f239e34] Running
	I0204 18:21:35.605226  305713 system_pods.go:89] "snapshot-controller-68b874b76f-jnzjl" [2eba0b1b-2ed6-449f-a870-6795d58ab1da] Running
	I0204 18:21:35.605230  305713 system_pods.go:89] "storage-provisioner" [844cb879-115c-4702-9138-eafe338ac24e] Running
	I0204 18:21:35.605237  305713 system_pods.go:126] duration metric: took 11.599015ms to wait for k8s-apps to be running ...
	I0204 18:21:35.605250  305713 system_svc.go:44] waiting for kubelet service to be running ....
	I0204 18:21:35.605306  305713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0204 18:21:35.623317  305713 system_svc.go:56] duration metric: took 18.055797ms WaitForService to wait for kubelet
	I0204 18:21:35.623347  305713 kubeadm.go:582] duration metric: took 2m2.996796146s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0204 18:21:35.623366  305713 node_conditions.go:102] verifying NodePressure condition ...
	I0204 18:21:35.627946  305713 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0204 18:21:35.627982  305713 node_conditions.go:123] node cpu capacity is 2
	I0204 18:21:35.627996  305713 node_conditions.go:105] duration metric: took 4.623795ms to run NodePressure ...
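The NodePressure check reads the node's status conditions and capacity. The figures reported here (2 CPUs, 203034800Ki ephemeral storage) can be pulled directly (a sketch; node name from this run):

    # Capacity as the check sees it.
    kubectl get node addons-405803 -o jsonpath='{.status.capacity}'
    # Pressure conditions, one per line.
    kubectl get node addons-405803 \
      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'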
	I0204 18:21:35.628010  305713 start.go:241] waiting for startup goroutines ...
	I0204 18:21:35.688553  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:36.188636  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:36.697349  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:37.190100  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:37.689440  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:38.189444  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:38.689441  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:39.188759  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:39.690012  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:40.188559  305713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0204 18:21:40.706096  305713 kapi.go:107] duration metric: took 2m0.02302666s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0204 18:21:40.709620  305713 out.go:177] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, cloud-spanner, ingress-dns, inspektor-gadget, storage-provisioner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0204 18:21:40.713639  305713 addons.go:514] duration metric: took 2m8.086692343s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin cloud-spanner ingress-dns inspektor-gadget storage-provisioner metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
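The enabled-addon set can be confirmed after a run with the addons subcommand (a sketch; the -p flag selects the profile this test created):

    minikube addons list -p addons-405803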
	I0204 18:21:40.713742  305713 start.go:246] waiting for cluster config update ...
	I0204 18:21:40.713805  305713 start.go:255] writing updated cluster config ...
	I0204 18:21:40.714147  305713 ssh_runner.go:195] Run: rm -f paused
	I0204 18:21:41.125444  305713 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0204 18:21:41.128634  305713 out.go:177] * Done! kubectl is now configured to use "addons-405803" cluster and "default" namespace by default
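"Done!" means minikube wrote and selected the kubeconfig context, so a quick sanity check is (a sketch; the expected values follow from this log):

    kubectl config current-context   # expect: addons-405803
    kubectl cluster-info             # control plane should point at https://192.168.49.2:8443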
	
	
	==> CRI-O <==
	Feb 04 18:24:28 addons-405803 crio[977]: time="2025-02-04 18:24:28.372644112Z" level=info msg="Removed pod sandbox: 38fba3ea29a5a7b4d71253c139228bdfae7327f54cb4cef2a3075b892de1e1ec" id=b5501e7c-0562-4e3c-9a9b-f55d58ba6b3c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 04 18:25:02 addons-405803 crio[977]: time="2025-02-04 18:25:02.219750921Z" level=info msg="Running pod sandbox: default/hello-world-app-7d9564db4-zsnkd/POD" id=4be3c360-c5e8-4846-9d3e-802173050a68 name=/runtime.v1.RuntimeService/RunPodSandbox
	Feb 04 18:25:02 addons-405803 crio[977]: time="2025-02-04 18:25:02.219836490Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 04 18:25:02 addons-405803 crio[977]: time="2025-02-04 18:25:02.258325075Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-zsnkd Namespace:default ID:c51fc999fe899de121b566fccf45e990015ffe6a3c04a8d0b98c9fde1067f9db UID:d5b59b83-6351-4b09-90ce-610f4e7293c4 NetNS:/var/run/netns/330c4cc8-339c-480e-9e91-5ee2c7c357bd Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 04 18:25:02 addons-405803 crio[977]: time="2025-02-04 18:25:02.258366929Z" level=info msg="Adding pod default_hello-world-app-7d9564db4-zsnkd to CNI network \"kindnet\" (type=ptp)"
	Feb 04 18:25:02 addons-405803 crio[977]: time="2025-02-04 18:25:02.275594181Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-zsnkd Namespace:default ID:c51fc999fe899de121b566fccf45e990015ffe6a3c04a8d0b98c9fde1067f9db UID:d5b59b83-6351-4b09-90ce-610f4e7293c4 NetNS:/var/run/netns/330c4cc8-339c-480e-9e91-5ee2c7c357bd Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 04 18:25:02 addons-405803 crio[977]: time="2025-02-04 18:25:02.275752274Z" level=info msg="Checking pod default_hello-world-app-7d9564db4-zsnkd for CNI network kindnet (type=ptp)"
	Feb 04 18:25:02 addons-405803 crio[977]: time="2025-02-04 18:25:02.278736358Z" level=info msg="Ran pod sandbox c51fc999fe899de121b566fccf45e990015ffe6a3c04a8d0b98c9fde1067f9db with infra container: default/hello-world-app-7d9564db4-zsnkd/POD" id=4be3c360-c5e8-4846-9d3e-802173050a68 name=/runtime.v1.RuntimeService/RunPodSandbox
	Feb 04 18:25:02 addons-405803 crio[977]: time="2025-02-04 18:25:02.287560023Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=d438b270-f8f7-45db-b0e1-80a02a6c0582 name=/runtime.v1.ImageService/ImageStatus
	Feb 04 18:25:02 addons-405803 crio[977]: time="2025-02-04 18:25:02.287810217Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=d438b270-f8f7-45db-b0e1-80a02a6c0582 name=/runtime.v1.ImageService/ImageStatus
	Feb 04 18:25:02 addons-405803 crio[977]: time="2025-02-04 18:25:02.290512986Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=bdf0b1e6-3672-4501-ba61-94703ed516ca name=/runtime.v1.ImageService/PullImage
	Feb 04 18:25:02 addons-405803 crio[977]: time="2025-02-04 18:25:02.292962747Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Feb 04 18:25:02 addons-405803 crio[977]: time="2025-02-04 18:25:02.600722770Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Feb 04 18:25:03 addons-405803 crio[977]: time="2025-02-04 18:25:03.437018325Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=bdf0b1e6-3672-4501-ba61-94703ed516ca name=/runtime.v1.ImageService/PullImage
	Feb 04 18:25:03 addons-405803 crio[977]: time="2025-02-04 18:25:03.437953255Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=f32837c1-3ff7-481c-917b-ac791ad14680 name=/runtime.v1.ImageService/ImageStatus
	Feb 04 18:25:03 addons-405803 crio[977]: time="2025-02-04 18:25:03.438636342Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f32837c1-3ff7-481c-917b-ac791ad14680 name=/runtime.v1.ImageService/ImageStatus
	Feb 04 18:25:03 addons-405803 crio[977]: time="2025-02-04 18:25:03.441211270Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=f85ec94a-fa37-4f00-bd02-3accb890af46 name=/runtime.v1.ImageService/ImageStatus
	Feb 04 18:25:03 addons-405803 crio[977]: time="2025-02-04 18:25:03.441911464Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f85ec94a-fa37-4f00-bd02-3accb890af46 name=/runtime.v1.ImageService/ImageStatus
	Feb 04 18:25:03 addons-405803 crio[977]: time="2025-02-04 18:25:03.442828548Z" level=info msg="Creating container: default/hello-world-app-7d9564db4-zsnkd/hello-world-app" id=1c6c3926-1540-4b1e-acef-aced26229a51 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 04 18:25:03 addons-405803 crio[977]: time="2025-02-04 18:25:03.442927434Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 04 18:25:03 addons-405803 crio[977]: time="2025-02-04 18:25:03.466746177Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c9a807251a196a3171b91611fec937e993f1fbd5bc7fa3f2de5275e421a778bc/merged/etc/passwd: no such file or directory"
	Feb 04 18:25:03 addons-405803 crio[977]: time="2025-02-04 18:25:03.466789024Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c9a807251a196a3171b91611fec937e993f1fbd5bc7fa3f2de5275e421a778bc/merged/etc/group: no such file or directory"
	Feb 04 18:25:03 addons-405803 crio[977]: time="2025-02-04 18:25:03.530729499Z" level=info msg="Created container 736ea327dd1911273baee2f8618bde8d57dd07d71cca72778191c8763103f242: default/hello-world-app-7d9564db4-zsnkd/hello-world-app" id=1c6c3926-1540-4b1e-acef-aced26229a51 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 04 18:25:03 addons-405803 crio[977]: time="2025-02-04 18:25:03.543304509Z" level=info msg="Starting container: 736ea327dd1911273baee2f8618bde8d57dd07d71cca72778191c8763103f242" id=5b4f7049-f885-4f7e-84be-c07d09c98c18 name=/runtime.v1.RuntimeService/StartContainer
	Feb 04 18:25:03 addons-405803 crio[977]: time="2025-02-04 18:25:03.564268363Z" level=info msg="Started container" PID=8578 containerID=736ea327dd1911273baee2f8618bde8d57dd07d71cca72778191c8763103f242 description=default/hello-world-app-7d9564db4-zsnkd/hello-world-app id=5b4f7049-f885-4f7e-84be-c07d09c98c18 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c51fc999fe899de121b566fccf45e990015ffe6a3c04a8d0b98c9fde1067f9db
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	736ea327dd191       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   c51fc999fe899       hello-world-app-7d9564db4-zsnkd
	f08d7f5c1875f       docker.io/library/nginx@sha256:4338a8ba9b9962d07e30e7ff4bbf27d62ee7523deb7205e8f0912169f1bbac10                              2 minutes ago            Running             nginx                     0                   4b870942dc2fb       nginx
	8a0f329b66cf1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                   0                   507cfaf3ff6ae       busybox
	0e839e640a8ae       registry.k8s.io/ingress-nginx/controller@sha256:787a5408fa511266888b2e765f9666bee67d9bf2518a6b7cfd4ab6cc01c22eee             3 minutes ago            Running             controller                0                   256781fb9b881       ingress-nginx-controller-56d7c84fd4-5429j
	0b1b74b30ee51       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             3 minutes ago            Running             local-path-provisioner    0                   9e4a0345f8886       local-path-provisioner-76f89f99b5-smtrm
	d1c91528c72f4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0550b75a965592f1dde3fbeaa98f67a1e10c5a086bcd69a29054cc4edcb56771   4 minutes ago            Exited              patch                     0                   23aa6ce9a8153       ingress-nginx-admission-patch-xbl2m
	21e3db5919230       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0550b75a965592f1dde3fbeaa98f67a1e10c5a086bcd69a29054cc4edcb56771   4 minutes ago            Exited              create                    0                   de2d1177e6916       ingress-nginx-admission-create-kqk2w
	d2346b0063600       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             4 minutes ago            Running             minikube-ingress-dns      0                   ac3818a02b2d9       kube-ingress-dns-minikube
	7711137e84daa       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             4 minutes ago            Running             coredns                   0                   59f7c38729add       coredns-668d6bf9bc-wnpwg
	6ed46cb9c6bd7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             4 minutes ago            Running             storage-provisioner       0                   23c077f5e395f       storage-provisioner
	a03859214bff6       docker.io/kindest/kindnetd@sha256:564b9fd29e72542e4baa14b382d4d9ee22132141caa6b9803b71faf9a4a799be                           5 minutes ago            Running             kindnet-cni               0                   3f176eeb2016d       kindnet-64zfx
	859d905d3d831       e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0                                                             5 minutes ago            Running             kube-proxy                0                   36ff842ec4e92       kube-proxy-kt9pn
	82075c36c6654       265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19                                                             5 minutes ago            Running             kube-apiserver            0                   4ead9320d11ea       kube-apiserver-addons-405803
	83cf5e37db9a7       2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13                                                             5 minutes ago            Running             kube-controller-manager   0                   135d65873cc32       kube-controller-manager-addons-405803
	4f64119ec6a5d       7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82                                                             5 minutes ago            Running             etcd                      0                   7c4b877c88b2d       etcd-addons-405803
	817c5e94f20c8       ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c                                                             5 minutes ago            Running             kube-scheduler            0                   b13941ee84a68       kube-scheduler-addons-405803
	
	
	==> coredns [7711137e84daa34cd0ec9b5027a6902acdf1a6a370f30551f51fa6505899bf7f] <==
	[INFO] 10.244.0.11:55985 - 60559 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.003656038s
	[INFO] 10.244.0.11:55985 - 44158 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000229312s
	[INFO] 10.244.0.11:55985 - 40006 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000201842s
	[INFO] 10.244.0.11:48585 - 61675 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000183307s
	[INFO] 10.244.0.11:48585 - 61897 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000096204s
	[INFO] 10.244.0.11:41530 - 6173 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000141027s
	[INFO] 10.244.0.11:41530 - 6601 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000117339s
	[INFO] 10.244.0.11:59000 - 32948 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000086202s
	[INFO] 10.244.0.11:59000 - 32767 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000857s
	[INFO] 10.244.0.11:49547 - 41309 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00162239s
	[INFO] 10.244.0.11:49547 - 41112 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001643813s
	[INFO] 10.244.0.11:43241 - 13254 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000109503s
	[INFO] 10.244.0.11:43241 - 12859 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000076503s
	[INFO] 10.244.0.21:35229 - 34789 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000172812s
	[INFO] 10.244.0.21:36363 - 4333 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000151406s
	[INFO] 10.244.0.21:57169 - 18172 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000158725s
	[INFO] 10.244.0.21:34094 - 61101 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139296s
	[INFO] 10.244.0.21:46653 - 35462 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00013329s
	[INFO] 10.244.0.21:58051 - 50894 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000136244s
	[INFO] 10.244.0.21:56181 - 9279 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00216299s
	[INFO] 10.244.0.21:41373 - 42248 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001809874s
	[INFO] 10.244.0.21:36402 - 41301 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002338912s
	[INFO] 10.244.0.21:45595 - 53616 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002337681s
	[INFO] 10.244.0.24:46572 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00019893s
	[INFO] 10.244.0.24:38544 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00013827s
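	
	The NXDOMAIN runs above are ordinary ndots:5 search-path expansion rather than failures: each suffix in the pod's search list (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) is tried before the fully qualified name answers NOERROR. The search list a pod actually uses can be read back directly (busybox is the default-namespace pod from the container listing above):
	
	  kubectl --context addons-405803 exec busybox -- cat /etc/resolv.conf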
	
	
	==> describe nodes <==
	Name:               addons-405803
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-405803
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2ad12868b53d667fdb2ff045ead964d3d2f95148
	                    minikube.k8s.io/name=addons-405803
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_04T18_19_28_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-405803
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 04 Feb 2025 18:19:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-405803
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 04 Feb 2025 18:24:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 04 Feb 2025 18:24:03 +0000   Tue, 04 Feb 2025 18:19:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 04 Feb 2025 18:24:03 +0000   Tue, 04 Feb 2025 18:19:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 04 Feb 2025 18:24:03 +0000   Tue, 04 Feb 2025 18:19:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 04 Feb 2025 18:24:03 +0000   Tue, 04 Feb 2025 18:20:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-405803
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 2958e9c562d645009517e32291d03715
	  System UUID:                530de01b-16ff-4b74-8dc6-243aec617c4b
	  Boot ID:                    adc8721a-623c-4f06-b023-679b8ba8ab86
	  Kernel Version:             5.15.0-1075-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  default                     hello-world-app-7d9564db4-zsnkd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-5429j    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m23s
	  kube-system                 coredns-668d6bf9bc-wnpwg                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m31s
	  kube-system                 etcd-addons-405803                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m37s
	  kube-system                 kindnet-64zfx                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m32s
	  kube-system                 kube-apiserver-addons-405803                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-controller-manager-addons-405803        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-proxy-kt9pn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-scheduler-addons-405803                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  local-path-storage          local-path-provisioner-76f89f99b5-smtrm      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m24s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  5m42s (x8 over 5m42s)  kubelet          Node addons-405803 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m42s (x8 over 5m42s)  kubelet          Node addons-405803 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m42s (x8 over 5m42s)  kubelet          Node addons-405803 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m36s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m36s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m35s                  kubelet          Node addons-405803 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m35s                  kubelet          Node addons-405803 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m35s                  kubelet          Node addons-405803 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m32s                  node-controller  Node addons-405803 event: Registered Node addons-405803 in Controller
	  Normal   NodeReady                4m44s                  kubelet          Node addons-405803 status is now: NodeReady
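	
	The percentage columns under "Allocated resources" are computed against Allocatable and truncated: 950m of CPU requests on a 2000m node is 47%, and 310Mi of memory requests against roughly 7.6Gi of allocatable memory is 3%. The same dump can be regenerated at any point in the run (a sketch):
	
	  kubectl --context addons-405803 describe node addons-405803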
	
	
	==> dmesg <==
	[Feb 4 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014338] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.482937] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032490] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.731226] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.169505] kauditd_printk_skb: 36 callbacks suppressed
	[Feb 4 17:47] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [4f64119ec6a5d17abb60728bbf74a86335aa084c64eaea70362f8fb51d84780f] <==
	{"level":"info","ts":"2025-02-04T18:19:22.599838Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-04T18:19:22.604218Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-04T18:19:22.604314Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-04T18:19:22.604432Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-04T18:19:22.604485Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-04T18:19:22.604543Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-04T18:19:22.604575Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-04T18:19:22.604821Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-04T18:19:22.605040Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-04T18:19:22.605763Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-02-04T18:19:22.616423Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2025-02-04T18:19:33.151337Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.492325ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128035057969610356 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-668d6bf9bc.1821143a973d290e\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-668d6bf9bc.1821143a973d290e\" value_size:622 lease:8128035057969609680 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-02-04T18:19:33.151442Z","caller":"traceutil/trace.go:171","msg":"trace[590286791] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"124.772693ms","start":"2025-02-04T18:19:33.026659Z","end":"2025-02-04T18:19:33.151431Z","steps":["trace[590286791] 'compare'  (duration: 120.379351ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-04T18:19:34.171817Z","caller":"traceutil/trace.go:171","msg":"trace[444852978] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"290.82213ms","start":"2025-02-04T18:19:33.880403Z","end":"2025-02-04T18:19:34.171225Z","steps":["trace[444852978] 'process raft request'  (duration: 135.484745ms)","trace[444852978] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/serviceaccounts/default/default; req_size:165; } (duration: 88.526948ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-04T18:19:34.238005Z","caller":"traceutil/trace.go:171","msg":"trace[1729672880] linearizableReadLoop","detail":"{readStateIndex:392; appliedIndex:391; }","duration":"193.800437ms","start":"2025-02-04T18:19:34.043972Z","end":"2025-02-04T18:19:34.237772Z","steps":["trace[1729672880] 'read index received'  (duration: 40.245µs)","trace[1729672880] 'applied index is now lower than readState.Index'  (duration: 130.054124ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-04T18:19:34.270607Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"226.614985ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-kt9pn\" limit:1 ","response":"range_response_count:1 size:4833"}
	{"level":"info","ts":"2025-02-04T18:19:34.388019Z","caller":"traceutil/trace.go:171","msg":"trace[1786652015] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-kt9pn; range_end:; response_count:1; response_revision:382; }","duration":"344.026096ms","start":"2025-02-04T18:19:34.043966Z","end":"2025-02-04T18:19:34.387992Z","steps":["trace[1786652015] 'agreement among raft nodes before linearized reading'  (duration: 130.146389ms)","trace[1786652015] 'range keys from bolt db'  (duration: 96.438509ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-04T18:19:34.388136Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-04T18:19:34.043921Z","time spent":"344.178085ms","remote":"127.0.0.1:41404","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":1,"response size":4857,"request content":"key:\"/registry/pods/kube-system/kube-proxy-kt9pn\" limit:1 "}
	{"level":"info","ts":"2025-02-04T18:19:34.577475Z","caller":"traceutil/trace.go:171","msg":"trace[2076089729] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"159.195642ms","start":"2025-02-04T18:19:34.418262Z","end":"2025-02-04T18:19:34.577458Z","steps":["trace[2076089729] 'process raft request'  (duration: 159.062837ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-04T18:19:36.626280Z","caller":"traceutil/trace.go:171","msg":"trace[1364144418] transaction","detail":"{read_only:false; response_revision:396; number_of_response:1; }","duration":"143.150289ms","start":"2025-02-04T18:19:36.483114Z","end":"2025-02-04T18:19:36.626265Z","steps":["trace[1364144418] 'process raft request'  (duration: 86.306565ms)","trace[1364144418] 'compare'  (duration: 56.406908ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-04T18:19:36.626825Z","caller":"traceutil/trace.go:171","msg":"trace[167695372] transaction","detail":"{read_only:false; response_revision:397; number_of_response:1; }","duration":"143.280387ms","start":"2025-02-04T18:19:36.483173Z","end":"2025-02-04T18:19:36.626453Z","steps":["trace[167695372] 'process raft request'  (duration: 142.775232ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-04T18:19:37.400140Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.261095ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/minikube-ingress-dns\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-04T18:19:37.400236Z","caller":"traceutil/trace.go:171","msg":"trace[1970732325] range","detail":"{range_begin:/registry/clusterroles/minikube-ingress-dns; range_end:; response_count:0; response_revision:407; }","duration":"101.366774ms","start":"2025-02-04T18:19:37.298855Z","end":"2025-02-04T18:19:37.400222Z","steps":["trace[1970732325] 'agreement among raft nodes before linearized reading'  (duration: 70.509518ms)","trace[1970732325] 'range keys from in-memory index tree'  (duration: 30.736151ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-04T18:19:37.748796Z","caller":"traceutil/trace.go:171","msg":"trace[1165825476] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"138.596113ms","start":"2025-02-04T18:19:37.610172Z","end":"2025-02-04T18:19:37.748768Z","steps":["trace[1165825476] 'process raft request'  (duration: 138.116698ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-04T18:19:37.760304Z","caller":"traceutil/trace.go:171","msg":"trace[679928247] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"110.385019ms","start":"2025-02-04T18:19:37.649905Z","end":"2025-02-04T18:19:37.760290Z","steps":["trace[679928247] 'process raft request'  (duration: 110.134186ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:25:03 up  2:07,  0 users,  load average: 0.68, 1.63, 2.59
	Linux addons-405803 5.15.0-1075-aws #82~20.04.1-Ubuntu SMP Thu Dec 19 05:23:06 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [a03859214bff603c38c7c0a0d28552572163519bc8e47112b505ec1ef8b7a8c0] <==
	I0204 18:22:59.049747       1 main.go:301] handling current node
	I0204 18:23:09.049322       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0204 18:23:09.049365       1 main.go:301] handling current node
	I0204 18:23:19.052134       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0204 18:23:19.053908       1 main.go:301] handling current node
	I0204 18:23:29.049635       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0204 18:23:29.049778       1 main.go:301] handling current node
	I0204 18:23:39.050995       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0204 18:23:39.051042       1 main.go:301] handling current node
	I0204 18:23:49.049366       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0204 18:23:49.049405       1 main.go:301] handling current node
	I0204 18:23:59.053429       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0204 18:23:59.053575       1 main.go:301] handling current node
	I0204 18:24:09.049388       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0204 18:24:09.049424       1 main.go:301] handling current node
	I0204 18:24:19.052659       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0204 18:24:19.052698       1 main.go:301] handling current node
	I0204 18:24:29.049390       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0204 18:24:29.049430       1 main.go:301] handling current node
	I0204 18:24:39.050322       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0204 18:24:39.050355       1 main.go:301] handling current node
	I0204 18:24:49.049360       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0204 18:24:49.049395       1 main.go:301] handling current node
	I0204 18:24:59.049378       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0204 18:24:59.049426       1 main.go:301] handling current node
	
	
	==> kube-apiserver [82075c36c66546085e9e15426460a71b44617fc12f18f91da09fbdda75e7692e] <==
	 > logger="UnhandledError"
	I0204 18:20:28.222014       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0204 18:21:52.264103       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59154: use of closed network connection
	E0204 18:21:52.511435       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59178: use of closed network connection
	I0204 18:22:02.176857       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.178.245"}
	I0204 18:22:29.184588       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0204 18:22:32.459145       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0204 18:22:33.487531       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0204 18:22:38.064990       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0204 18:22:38.402290       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.99.189"}
	I0204 18:22:42.993014       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0204 18:23:08.473657       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0204 18:23:08.473718       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0204 18:23:08.489604       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0204 18:23:08.490055       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0204 18:23:08.517342       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0204 18:23:08.517756       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0204 18:23:08.625647       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0204 18:23:08.626263       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0204 18:23:08.657279       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0204 18:23:08.657413       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0204 18:23:09.627427       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0204 18:23:09.658260       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0204 18:23:09.666210       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0204 18:25:01.883514       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.46.252"}
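	
	The final allocation line correlates with the hello-world-app sandbox creation in the CRI-O log above: the Service ClusterIP 10.103.46.252 was handed out at 18:25:01, one second before the pod sandbox ran. It can be confirmed with (a sketch):
	
	  kubectl --context addons-405803 get svc hello-world-app -o wide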
	
	
	==> kube-controller-manager [83cf5e37db9a7c84b82367325d5eaffa1fef24a0bbbd4c297e58f9aee087cc3e] <==
	I0204 18:24:03.807356       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="addons-405803"
	W0204 18:24:11.644364       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0204 18:24:11.645438       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0204 18:24:11.646427       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0204 18:24:11.646464       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0204 18:24:25.638502       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0204 18:24:25.639737       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0204 18:24:25.640768       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0204 18:24:25.640804       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0204 18:24:31.832786       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0204 18:24:31.833914       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0204 18:24:31.834756       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0204 18:24:31.834789       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0204 18:24:45.213475       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0204 18:24:45.215768       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0204 18:24:45.217616       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0204 18:24:45.217710       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0204 18:25:01.635335       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="51.820807ms"
	I0204 18:25:01.641611       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="6.01753ms"
	I0204 18:25:01.641873       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="42.576µs"
	I0204 18:25:01.664819       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="157.756µs"
	W0204 18:25:03.374108       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0204 18:25:03.379327       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0204 18:25:03.381044       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0204 18:25:03.381139       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
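	
	The PartialObjectMetadata reflector failures are a follow-on from the addon teardown visible in the apiserver log above: the gadget.kinvolk.io and snapshot.storage.k8s.io CRDs were removed at 18:22:33 and 18:23:09 while the metadata informers (used by controllers such as the garbage collector) still held watches, so they retry until the next discovery resync. That the group is really gone is one line (a sketch):
	
	  kubectl --context addons-405803 api-resources --api-group=snapshot.storage.k8s.io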
	
	
	==> kube-proxy [859d905d3d831f871af7bc06f64a989aac2c85bc958f52c745fe73a535c90a13] <==
	I0204 18:19:38.676690       1 server_linux.go:66] "Using iptables proxy"
	I0204 18:19:39.285285       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0204 18:19:39.352291       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0204 18:19:39.785591       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0204 18:19:39.785724       1 server_linux.go:170] "Using iptables Proxier"
	I0204 18:19:39.790446       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0204 18:19:39.792542       1 server.go:497] "Version info" version="v1.32.1"
	I0204 18:19:39.792801       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0204 18:19:39.794179       1 config.go:199] "Starting service config controller"
	I0204 18:19:39.798179       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0204 18:19:39.798293       1 config.go:105] "Starting endpoint slice config controller"
	I0204 18:19:39.798333       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0204 18:19:39.800764       1 config.go:329] "Starting node config controller"
	I0204 18:19:39.800846       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0204 18:19:39.898528       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0204 18:19:39.898578       1 shared_informer.go:320] Caches are synced for service config
	I0204 18:19:39.912702       1 shared_informer.go:320] Caches are synced for node config
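	
	The "configuration may be incomplete or incorrect" line is advisory: nodePortAddresses is simply unset, so NodePort connections are accepted on every local IP. If the suggested setting were applied, it would live under the config.conf key of the kube-proxy ConfigMap (a sketch):
	
	  kubectl --context addons-405803 -n kube-system get configmap kube-proxy -o yaml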
	
	
	==> kube-scheduler [817c5e94f20c88d549ab94bc6f88ad31fd5cdad56631e72cc8d18f406da984c0] <==
	W0204 18:19:25.165019       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0204 18:19:25.166707       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0204 18:19:25.164405       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0204 18:19:25.166823       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0204 18:19:25.173148       1 reflector.go:569] runtime/asm_arm64.s:1223: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0204 18:19:25.173260       1 reflector.go:166] "Unhandled Error" err="runtime/asm_arm64.s:1223: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0204 18:19:26.034236       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0204 18:19:26.034282       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0204 18:19:26.063258       1 reflector.go:569] runtime/asm_arm64.s:1223: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0204 18:19:26.063300       1 reflector.go:166] "Unhandled Error" err="runtime/asm_arm64.s:1223: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0204 18:19:26.097718       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0204 18:19:26.097776       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0204 18:19:26.133693       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0204 18:19:26.133735       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0204 18:19:26.150290       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0204 18:19:26.150335       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0204 18:19:26.160867       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0204 18:19:26.160909       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0204 18:19:26.167783       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0204 18:19:26.167836       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0204 18:19:26.317378       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0204 18:19:26.317524       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0204 18:19:26.352783       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0204 18:19:26.352909       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0204 18:19:29.137066       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
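	
	The burst of "forbidden" reflector errors at 18:19:25-18:19:26 is the usual control-plane bootstrap race: the scheduler starts before the apiserver has finished publishing its RBAC, and the final "Caches are synced" line marks recovery. The binding the scheduler depends on can be confirmed afterwards (a sketch):
	
	  kubectl --context addons-405803 get clusterrolebinding system:kube-scheduler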
	
	
	==> kubelet <==
	Feb 04 18:24:27 addons-405803 kubelet[1499]: E0204 18:24:27.958699    1499 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3ec446842c4ed2310b34b8a01ca213b94d68ee6b2198083ad8a76c1ddc8ec664/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3ec446842c4ed2310b34b8a01ca213b94d68ee6b2198083ad8a76c1ddc8ec664/diff: no such file or directory, extraDiskErr: <nil>
	Feb 04 18:24:27 addons-405803 kubelet[1499]: E0204 18:24:27.959378    1499 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b9d9e8f6fee6db1c6834f155ecdc6d490f6ffdf686441fc1a8564a224fdf2185/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b9d9e8f6fee6db1c6834f155ecdc6d490f6ffdf686441fc1a8564a224fdf2185/diff: no such file or directory, extraDiskErr: <nil>
	Feb 04 18:24:27 addons-405803 kubelet[1499]: E0204 18:24:27.978649    1499 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3a11bc76dbd201353c86ab650ee4d4bef81101c9bb6e2b0d41f0158fa3dd72fd/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3a11bc76dbd201353c86ab650ee4d4bef81101c9bb6e2b0d41f0158fa3dd72fd/diff: no such file or directory, extraDiskErr: <nil>
	Feb 04 18:24:27 addons-405803 kubelet[1499]: E0204 18:24:27.979020    1499 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/dce16a385b8fda57bbc80d6e149d47bf54ee8a2af011efedace4cbe7892dcef3/diff" to get inode usage: stat /var/lib/containers/storage/overlay/dce16a385b8fda57bbc80d6e149d47bf54ee8a2af011efedace4cbe7892dcef3/diff: no such file or directory, extraDiskErr: <nil>
	Feb 04 18:24:27 addons-405803 kubelet[1499]: E0204 18:24:27.980216    1499 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3ec446842c4ed2310b34b8a01ca213b94d68ee6b2198083ad8a76c1ddc8ec664/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3ec446842c4ed2310b34b8a01ca213b94d68ee6b2198083ad8a76c1ddc8ec664/diff: no such file or directory, extraDiskErr: <nil>
	Feb 04 18:24:27 addons-405803 kubelet[1499]: E0204 18:24:27.981518    1499 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/df26b5661c793685b71705d6d24b732e332633e2c804a43e7d1243ce84c9e193/diff" to get inode usage: stat /var/lib/containers/storage/overlay/df26b5661c793685b71705d6d24b732e332633e2c804a43e7d1243ce84c9e193/diff: no such file or directory, extraDiskErr: <nil>
	Feb 04 18:24:27 addons-405803 kubelet[1499]: E0204 18:24:27.987106    1499 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/22088fc92feb79d0770d2d90f5eae743606150c8f9fe632bbfb72d7fe315d54b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/22088fc92feb79d0770d2d90f5eae743606150c8f9fe632bbfb72d7fe315d54b/diff: no such file or directory, extraDiskErr: <nil>
	Feb 04 18:24:27 addons-405803 kubelet[1499]: E0204 18:24:27.990829    1499 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5182335482af92a4c909224d4939ce2eedcb492ce27dc35d0a945ad1629a12d1/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5182335482af92a4c909224d4939ce2eedcb492ce27dc35d0a945ad1629a12d1/diff: no such file or directory, extraDiskErr: <nil>
	Feb 04 18:24:28 addons-405803 kubelet[1499]: E0204 18:24:28.011788    1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738693468011517838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 04 18:24:28 addons-405803 kubelet[1499]: E0204 18:24:28.011829    1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738693468011517838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 04 18:24:28 addons-405803 kubelet[1499]: I0204 18:24:28.251525    1499 scope.go:117] "RemoveContainer" containerID="5bf22c98805e10b65ef3efd5fe99c7a87931e723f1bb9ef1eb97f8e2df5d420d"
	Feb 04 18:24:28 addons-405803 kubelet[1499]: I0204 18:24:28.274014    1499 scope.go:117] "RemoveContainer" containerID="54cd6d34dcd5a8d494fc7305e74503defaf865ed2e2d59784658a20216be8f04"
	Feb 04 18:24:28 addons-405803 kubelet[1499]: I0204 18:24:28.295387    1499 scope.go:117] "RemoveContainer" containerID="9df40536819f4ede49b5ed3494abdc853c436c3311072876388a09daaa3d49d8"
	Feb 04 18:24:33 addons-405803 kubelet[1499]: E0204 18:24:33.625246    1499 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5b58ecea8d48b7aee2941bd7af453327b1699efd41040f3ae33452a5e43afb58/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5b58ecea8d48b7aee2941bd7af453327b1699efd41040f3ae33452a5e43afb58/diff: no such file or directory, extraDiskErr: <nil>
	Feb 04 18:24:38 addons-405803 kubelet[1499]: E0204 18:24:38.013964    1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738693478013718572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 04 18:24:38 addons-405803 kubelet[1499]: E0204 18:24:38.014002    1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738693478013718572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 04 18:24:38 addons-405803 kubelet[1499]: E0204 18:24:38.895473    1499 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7ebb8fbf7fc94b270815d42825261d0f8461e58fbdf5908b09e9b03f9d1ad917/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7ebb8fbf7fc94b270815d42825261d0f8461e58fbdf5908b09e9b03f9d1ad917/diff: no such file or directory, extraDiskErr: <nil>
	Feb 04 18:24:48 addons-405803 kubelet[1499]: E0204 18:24:48.017190    1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738693488016855357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 04 18:24:48 addons-405803 kubelet[1499]: E0204 18:24:48.017237    1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738693488016855357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 04 18:24:58 addons-405803 kubelet[1499]: E0204 18:24:58.020524    1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738693498020256848,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 04 18:24:58 addons-405803 kubelet[1499]: E0204 18:24:58.020565    1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738693498020256848,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605643,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 04 18:25:01 addons-405803 kubelet[1499]: I0204 18:25:01.616936    1499 memory_manager.go:355] "RemoveStaleState removing state" podUID="ffaee120-318e-4eb1-9e86-7a0c02869a20" containerName="helper-pod"
	Feb 04 18:25:01 addons-405803 kubelet[1499]: I0204 18:25:01.616975    1499 memory_manager.go:355] "RemoveStaleState removing state" podUID="0bcd3867-1027-4270-93c9-c900b133cbda" containerName="cloud-spanner-emulator"
	Feb 04 18:25:01 addons-405803 kubelet[1499]: I0204 18:25:01.801658    1499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8nz7\" (UniqueName: \"kubernetes.io/projected/d5b59b83-6351-4b09-90ce-610f4e7293c4-kube-api-access-s8nz7\") pod \"hello-world-app-7d9564db4-zsnkd\" (UID: \"d5b59b83-6351-4b09-90ce-610f4e7293c4\") " pod="default/hello-world-app-7d9564db4-zsnkd"
	Feb 04 18:25:02 addons-405803 kubelet[1499]: E0204 18:25:02.606021    1499 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0e7e5d694ce84c99ae084f9355e88a4b18f7ae74c780f5a9d4a7f444895db2bc/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0e7e5d694ce84c99ae084f9355e88a4b18f7ae74c780f5a9d4a7f444895db2bc/diff: no such file or directory, extraDiskErr: <nil>
	
	
	==> storage-provisioner [6ed46cb9c6bd7ede189cc1cf59e9e97043311c5bdfcdf4912d28fde39123ae6c] <==
	I0204 18:20:19.911013       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0204 18:20:19.930839       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0204 18:20:19.930887       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0204 18:20:19.956977       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0204 18:20:19.957887       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-405803_73147e30-8849-459d-9477-cd3437194231!
	I0204 18:20:19.959986       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"18ccbc20-c7d6-4986-8cbf-e76259ff2cd3", APIVersion:"v1", ResourceVersion:"906", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-405803_73147e30-8849-459d-9477-cd3437194231 became leader
	I0204 18:20:20.058062       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-405803_73147e30-8849-459d-9477-cd3437194231!
	

-- /stdout --
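
The storage-provisioner log at the end of the dump above shows a normal endpoints-based leader election on the kube-system/k8s.io-minikube-hostpath lease. As a hedged diagnostic sketch (standard client-go endpoints-lock elections record the holder in the control-plane.alpha.kubernetes.io/leader annotation; the grep form sidesteps JSONPath quoting of the dotted key), the current holder can be read off the Endpoints object:

	# Show which provisioner identity currently holds the hostpath lease
	kubectl --context addons-405803 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml \
	  | grep control-plane.alpha.kubernetes.io/leader
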
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-405803 -n addons-405803
helpers_test.go:261: (dbg) Run:  kubectl --context addons-405803 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-kqk2w ingress-nginx-admission-patch-xbl2m
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-405803 describe pod ingress-nginx-admission-create-kqk2w ingress-nginx-admission-patch-xbl2m
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-405803 describe pod ingress-nginx-admission-create-kqk2w ingress-nginx-admission-patch-xbl2m: exit status 1 (142.567218ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-kqk2w" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xbl2m" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-405803 describe pod ingress-nginx-admission-create-kqk2w ingress-nginx-admission-patch-xbl2m: exit status 1
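
The NotFound errors are expected once the short-lived admission job pods have been garbage-collected; the non-zero exit comes from kubectl describe itself. A sketch of a gentler post-mortem probe (kubectl get supports --ignore-not-found, which exits 0 when the pods are already gone):

	# Probe the admission pods without failing when they no longer exist
	kubectl --context addons-405803 get pod \
	  ingress-nginx-admission-create-kqk2w ingress-nginx-admission-patch-xbl2m --ignore-not-found
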
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-405803 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-405803 addons disable ingress-dns --alsologtostderr -v=1: (1.309171773s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-405803 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-405803 addons disable ingress --alsologtostderr -v=1: (7.801515672s)
--- FAIL: TestAddons/parallel/Ingress (156.64s)
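
After the post-mortem, the harness tears down the addons it exercised (the two disable calls above). A hedged sketch for restoring the ingress stack and confirming controller readiness before retrying the scenario by hand (standard minikube and kubectl invocations; the ingress-nginx namespace and controller label are the stock ones the addon deploys):

	# Re-enable the addons disabled above, then wait for the controller pod
	out/minikube-linux-arm64 -p addons-405803 addons enable ingress
	out/minikube-linux-arm64 -p addons-405803 addons enable ingress-dns
	kubectl --context addons-405803 -n ingress-nginx wait pod \
	  --selector=app.kubernetes.io/component=controller --for=condition=ready --timeout=90s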

Test pass (298/331)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.94
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.32.1/json-events 8.21
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.08
18 TestDownloadOnly/v1.32.1/DeleteAll 0.22
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 179.54
31 TestAddons/serial/GCPAuth/Namespaces 0.25
32 TestAddons/serial/GCPAuth/FakeCredentials 10.94
35 TestAddons/parallel/Registry 17.74
37 TestAddons/parallel/InspektorGadget 11.85
38 TestAddons/parallel/MetricsServer 6.89
40 TestAddons/parallel/CSI 57.31
41 TestAddons/parallel/Headlamp 17.09
42 TestAddons/parallel/CloudSpanner 6.58
43 TestAddons/parallel/LocalPath 10.41
44 TestAddons/parallel/NvidiaDevicePlugin 5.52
45 TestAddons/parallel/Yakd 11.73
47 TestAddons/StoppedEnableDisable 12.2
48 TestCertOptions 38.34
49 TestCertExpiration 239.92
51 TestForceSystemdFlag 39.21
52 TestForceSystemdEnv 40.62
58 TestErrorSpam/setup 29.8
59 TestErrorSpam/start 0.84
60 TestErrorSpam/status 1.26
61 TestErrorSpam/pause 1.79
62 TestErrorSpam/unpause 1.81
63 TestErrorSpam/stop 1.53
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 51.52
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 28.97
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.69
75 TestFunctional/serial/CacheCmd/cache/add_local 1.44
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.19
80 TestFunctional/serial/CacheCmd/cache/delete 0.14
81 TestFunctional/serial/MinikubeKubectlCmd 0.19
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 37.45
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.74
86 TestFunctional/serial/LogsFileCmd 1.79
87 TestFunctional/serial/InvalidService 4.29
89 TestFunctional/parallel/ConfigCmd 0.53
90 TestFunctional/parallel/DashboardCmd 9.67
91 TestFunctional/parallel/DryRun 0.51
92 TestFunctional/parallel/InternationalLanguage 0.23
93 TestFunctional/parallel/StatusCmd 1.03
97 TestFunctional/parallel/ServiceCmdConnect 12.79
98 TestFunctional/parallel/AddonsCmd 0.29
99 TestFunctional/parallel/PersistentVolumeClaim 26.26
101 TestFunctional/parallel/SSHCmd 0.71
102 TestFunctional/parallel/CpCmd 2.28
104 TestFunctional/parallel/FileSync 0.28
105 TestFunctional/parallel/CertSync 2.26
109 TestFunctional/parallel/NodeLabels 0.11
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.68
113 TestFunctional/parallel/License 0.31
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.46
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.14
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
127 TestFunctional/parallel/ProfileCmd/profile_list 0.6
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
129 TestFunctional/parallel/MountCmd/any-port 9.21
130 TestFunctional/parallel/ServiceCmd/List 0.6
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.59
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.43
133 TestFunctional/parallel/ServiceCmd/Format 0.38
134 TestFunctional/parallel/ServiceCmd/URL 0.4
135 TestFunctional/parallel/MountCmd/specific-port 1.6
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.97
137 TestFunctional/parallel/Version/short 0.08
138 TestFunctional/parallel/Version/components 1.37
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.48
144 TestFunctional/parallel/ImageCommands/Setup 0.81
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
148 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.67
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.07
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.38
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.52
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.85
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 184.76
163 TestMultiControlPlane/serial/DeployApp 8.76
164 TestMultiControlPlane/serial/PingHostFromPods 1.78
165 TestMultiControlPlane/serial/AddWorkerNode 36.36
166 TestMultiControlPlane/serial/NodeLabels 0.12
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.99
168 TestMultiControlPlane/serial/CopyFile 19.2
169 TestMultiControlPlane/serial/StopSecondaryNode 12.79
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.79
171 TestMultiControlPlane/serial/RestartSecondaryNode 25.89
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.29
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 165.73
174 TestMultiControlPlane/serial/DeleteSecondaryNode 12.5
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.72
176 TestMultiControlPlane/serial/StopCluster 35.77
177 TestMultiControlPlane/serial/RestartCluster 110.85
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
179 TestMultiControlPlane/serial/AddSecondaryNode 75.43
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.05
184 TestJSONOutput/start/Command 55.83
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.77
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.68
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.9
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.25
209 TestKicCustomNetwork/create_custom_network 39.53
210 TestKicCustomNetwork/use_default_bridge_network 37.04
211 TestKicExistingNetwork 29.89
212 TestKicCustomSubnet 32.92
213 TestKicStaticIP 38.18
214 TestMainNoArgs 0.06
215 TestMinikubeProfile 67.08
218 TestMountStart/serial/StartWithMountFirst 7.56
219 TestMountStart/serial/VerifyMountFirst 0.26
220 TestMountStart/serial/StartWithMountSecond 9.16
221 TestMountStart/serial/VerifyMountSecond 0.27
222 TestMountStart/serial/DeleteFirst 1.68
223 TestMountStart/serial/VerifyMountPostDelete 0.27
224 TestMountStart/serial/Stop 1.21
225 TestMountStart/serial/RestartStopped 7.73
226 TestMountStart/serial/VerifyMountPostStop 0.26
229 TestMultiNode/serial/FreshStart2Nodes 76.84
230 TestMultiNode/serial/DeployApp2Nodes 6.83
231 TestMultiNode/serial/PingHostFrom2Pods 1.06
232 TestMultiNode/serial/AddNode 31.43
233 TestMultiNode/serial/MultiNodeLabels 0.09
234 TestMultiNode/serial/ProfileList 0.67
235 TestMultiNode/serial/CopyFile 10.23
236 TestMultiNode/serial/StopNode 2.26
237 TestMultiNode/serial/StartAfterStop 10.66
238 TestMultiNode/serial/RestartKeepsNodes 86.97
239 TestMultiNode/serial/DeleteNode 5.25
240 TestMultiNode/serial/StopMultiNode 23.89
241 TestMultiNode/serial/RestartMultiNode 47.16
242 TestMultiNode/serial/ValidateNameConflict 33.91
247 TestPreload 131.42
249 TestScheduledStopUnix 107.76
252 TestInsufficientStorage 10.73
253 TestRunningBinaryUpgrade 74.64
255 TestKubernetesUpgrade 398.19
256 TestMissingContainerUpgrade 169.43
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
259 TestNoKubernetes/serial/StartWithK8s 38.52
260 TestNoKubernetes/serial/StartWithStopK8s 13.02
261 TestNoKubernetes/serial/Start 9.18
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
263 TestNoKubernetes/serial/ProfileList 1.09
264 TestNoKubernetes/serial/Stop 1.27
265 TestNoKubernetes/serial/StartNoArgs 8.25
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.37
267 TestStoppedBinaryUpgrade/Setup 1.45
268 TestStoppedBinaryUpgrade/Upgrade 85.4
269 TestStoppedBinaryUpgrade/MinikubeLogs 1.51
278 TestPause/serial/Start 53.41
279 TestPause/serial/SecondStartNoReconfiguration 29.06
280 TestPause/serial/Pause 1.04
281 TestPause/serial/VerifyStatus 0.4
282 TestPause/serial/Unpause 0.81
283 TestPause/serial/PauseAgain 1.48
284 TestPause/serial/DeletePaused 3.4
285 TestPause/serial/VerifyDeletedResources 2.73
293 TestNetworkPlugins/group/false 5.63
298 TestStartStop/group/old-k8s-version/serial/FirstStart 174.94
300 TestStartStop/group/no-preload/serial/FirstStart 67.95
301 TestStartStop/group/old-k8s-version/serial/DeployApp 11.97
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.58
303 TestStartStop/group/old-k8s-version/serial/Stop 13.59
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
305 TestStartStop/group/old-k8s-version/serial/SecondStart 153.13
306 TestStartStop/group/no-preload/serial/DeployApp 9.4
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.64
308 TestStartStop/group/no-preload/serial/Stop 12.4
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
310 TestStartStop/group/no-preload/serial/SecondStart 266.74
311 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
312 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
313 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
314 TestStartStop/group/old-k8s-version/serial/Pause 3.06
316 TestStartStop/group/embed-certs/serial/FirstStart 48.42
317 TestStartStop/group/embed-certs/serial/DeployApp 11.35
318 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.13
319 TestStartStop/group/embed-certs/serial/Stop 11.95
320 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
321 TestStartStop/group/embed-certs/serial/SecondStart 272.5
322 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
323 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
324 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
325 TestStartStop/group/no-preload/serial/Pause 3.34
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 51.9
328 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.35
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.2
330 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.98
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
332 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 277.18
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.1
335 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
336 TestStartStop/group/embed-certs/serial/Pause 3.21
338 TestStartStop/group/newest-cni/serial/FirstStart 35.11
339 TestStartStop/group/newest-cni/serial/DeployApp 0
340 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.45
341 TestStartStop/group/newest-cni/serial/Stop 1.35
342 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
343 TestStartStop/group/newest-cni/serial/SecondStart 16.45
344 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
347 TestStartStop/group/newest-cni/serial/Pause 3.19
348 TestNetworkPlugins/group/auto/Start 54.97
349 TestNetworkPlugins/group/auto/KubeletFlags 0.33
350 TestNetworkPlugins/group/auto/NetCatPod 12.3
351 TestNetworkPlugins/group/auto/DNS 0.19
352 TestNetworkPlugins/group/auto/Localhost 0.17
353 TestNetworkPlugins/group/auto/HairPin 0.16
354 TestNetworkPlugins/group/kindnet/Start 55.85
355 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
356 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.15
357 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.36
358 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.52
359 TestNetworkPlugins/group/calico/Start 74.02
360 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
361 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
362 TestNetworkPlugins/group/kindnet/NetCatPod 12.41
363 TestNetworkPlugins/group/kindnet/DNS 0.28
364 TestNetworkPlugins/group/kindnet/Localhost 0.24
365 TestNetworkPlugins/group/kindnet/HairPin 0.24
366 TestNetworkPlugins/group/custom-flannel/Start 64.8
367 TestNetworkPlugins/group/calico/ControllerPod 6.01
368 TestNetworkPlugins/group/calico/KubeletFlags 0.38
369 TestNetworkPlugins/group/calico/NetCatPod 13.47
370 TestNetworkPlugins/group/calico/DNS 0.36
371 TestNetworkPlugins/group/calico/Localhost 0.23
372 TestNetworkPlugins/group/calico/HairPin 0.27
373 TestNetworkPlugins/group/enable-default-cni/Start 45.56
374 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.36
375 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.36
376 TestNetworkPlugins/group/custom-flannel/DNS 0.28
377 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
378 TestNetworkPlugins/group/custom-flannel/HairPin 0.27
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.31
381 TestNetworkPlugins/group/flannel/Start 64.6
382 TestNetworkPlugins/group/enable-default-cni/DNS 0.28
383 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
384 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
385 TestNetworkPlugins/group/bridge/Start 50.29
386 TestNetworkPlugins/group/flannel/ControllerPod 6.01
387 TestNetworkPlugins/group/flannel/KubeletFlags 0.43
388 TestNetworkPlugins/group/flannel/NetCatPod 13.38
389 TestNetworkPlugins/group/flannel/DNS 0.24
390 TestNetworkPlugins/group/flannel/Localhost 0.17
391 TestNetworkPlugins/group/flannel/HairPin 0.16
392 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
393 TestNetworkPlugins/group/bridge/NetCatPod 12.27
394 TestNetworkPlugins/group/bridge/DNS 0.25
395 TestNetworkPlugins/group/bridge/Localhost 0.23
396 TestNetworkPlugins/group/bridge/HairPin 0.27
TestDownloadOnly/v1.20.0/json-events (7.94s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-767281 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-767281 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.939936454s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.94s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0204 18:18:30.916104  304949 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0204 18:18:30.916217  304949 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20345-299426/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
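
preload-exists only asserts that the tarball cached by the earlier json-events run is still on disk. A hedged equivalent by hand, using the cache path logged above with MINIKUBE_HOME substituted for the Jenkins workspace prefix:

	# Confirm the v1.20.0 CRI-O preload tarball is present in the cache
	ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4"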

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-767281
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-767281: exit status 85 (97.771577ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-767281 | jenkins | v1.35.0 | 04 Feb 25 18:18 UTC |          |
	|         | -p download-only-767281        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/04 18:18:23
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0204 18:18:23.029130  304954 out.go:345] Setting OutFile to fd 1 ...
	I0204 18:18:23.029332  304954 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0204 18:18:23.029359  304954 out.go:358] Setting ErrFile to fd 2...
	I0204 18:18:23.029379  304954 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0204 18:18:23.029663  304954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20345-299426/.minikube/bin
	W0204 18:18:23.029825  304954 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20345-299426/.minikube/config/config.json: open /home/jenkins/minikube-integration/20345-299426/.minikube/config/config.json: no such file or directory
	I0204 18:18:23.030315  304954 out.go:352] Setting JSON to true
	I0204 18:18:23.031240  304954 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7252,"bootTime":1738685851,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0204 18:18:23.031373  304954 start.go:139] virtualization:  
	I0204 18:18:23.035712  304954 out.go:97] [download-only-767281] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	W0204 18:18:23.035892  304954 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20345-299426/.minikube/cache/preloaded-tarball: no such file or directory
	I0204 18:18:23.035938  304954 notify.go:220] Checking for updates...
	I0204 18:18:23.039069  304954 out.go:169] MINIKUBE_LOCATION=20345
	I0204 18:18:23.042093  304954 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0204 18:18:23.045171  304954 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20345-299426/kubeconfig
	I0204 18:18:23.048167  304954 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20345-299426/.minikube
	I0204 18:18:23.051130  304954 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0204 18:18:23.056730  304954 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0204 18:18:23.056975  304954 driver.go:394] Setting default libvirt URI to qemu:///system
	I0204 18:18:23.084321  304954 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0204 18:18:23.084451  304954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0204 18:18:23.152293  304954 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-04 18:18:23.14246263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0204 18:18:23.152446  304954 docker.go:318] overlay module found
	I0204 18:18:23.155495  304954 out.go:97] Using the docker driver based on user configuration
	I0204 18:18:23.155530  304954 start.go:297] selected driver: docker
	I0204 18:18:23.155538  304954 start.go:901] validating driver "docker" against <nil>
	I0204 18:18:23.155644  304954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0204 18:18:23.211060  304954 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-04 18:18:23.201674632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0204 18:18:23.211273  304954 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0204 18:18:23.211573  304954 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0204 18:18:23.211743  304954 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0204 18:18:23.214921  304954 out.go:169] Using Docker driver with root privileges
	I0204 18:18:23.217915  304954 cni.go:84] Creating CNI manager for ""
	I0204 18:18:23.217989  304954 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0204 18:18:23.218008  304954 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0204 18:18:23.218098  304954 start.go:340] cluster config:
	{Name:download-only-767281 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-767281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0204 18:18:23.221050  304954 out.go:97] Starting "download-only-767281" primary control-plane node in "download-only-767281" cluster
	I0204 18:18:23.221077  304954 cache.go:121] Beginning downloading kic base image for docker with crio
	I0204 18:18:23.223935  304954 out.go:97] Pulling base image v0.0.46 ...
	I0204 18:18:23.223976  304954 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0204 18:18:23.224072  304954 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0204 18:18:23.239919  304954 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0204 18:18:23.240100  304954 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0204 18:18:23.240229  304954 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0204 18:18:23.287499  304954 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0204 18:18:23.287529  304954 cache.go:56] Caching tarball of preloaded images
	I0204 18:18:23.287692  304954 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0204 18:18:23.291040  304954 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0204 18:18:23.291074  304954 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0204 18:18:23.379040  304954 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/20345-299426/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0204 18:18:27.807470  304954 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	
	
	* The control-plane node download-only-767281 host does not exist
	  To start a cluster, run: "minikube start -p download-only-767281"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
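
The Last Start log above records both the preload URL and its expected md5 (59cd2ef07b53f039bfd1761b921f2a02), so the same artifact can be fetched and verified outside minikube; a sketch:

	# Re-download the preload and check it against the checksum from the log
	curl -fLo preload-v1.20.0.tar.lz4 \
	  https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	md5sum preload-v1.20.0.tar.lz4   # expect 59cd2ef07b53f039bfd1761b921f2a02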

TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-767281
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.32.1/json-events (8.21s)

=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-305189 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-305189 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.21071725s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (8.21s)

TestDownloadOnly/v1.32.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0204 18:18:39.600469  304949 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
I0204 18:18:39.600553  304949 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20345-299426/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

TestDownloadOnly/v1.32.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-305189
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-305189: exit status 85 (80.821578ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-767281 | jenkins | v1.35.0 | 04 Feb 25 18:18 UTC |                     |
	|         | -p download-only-767281        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 04 Feb 25 18:18 UTC | 04 Feb 25 18:18 UTC |
	| delete  | -p download-only-767281        | download-only-767281 | jenkins | v1.35.0 | 04 Feb 25 18:18 UTC | 04 Feb 25 18:18 UTC |
	| start   | -o=json --download-only        | download-only-305189 | jenkins | v1.35.0 | 04 Feb 25 18:18 UTC |                     |
	|         | -p download-only-305189        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/04 18:18:31
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0204 18:18:31.438338  305155 out.go:345] Setting OutFile to fd 1 ...
	I0204 18:18:31.438560  305155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0204 18:18:31.438589  305155 out.go:358] Setting ErrFile to fd 2...
	I0204 18:18:31.438608  305155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0204 18:18:31.438912  305155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20345-299426/.minikube/bin
	I0204 18:18:31.439344  305155 out.go:352] Setting JSON to true
	I0204 18:18:31.440279  305155 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7261,"bootTime":1738685851,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0204 18:18:31.440376  305155 start.go:139] virtualization:  
	I0204 18:18:31.443871  305155 out.go:97] [download-only-305189] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0204 18:18:31.444092  305155 notify.go:220] Checking for updates...
	I0204 18:18:31.446977  305155 out.go:169] MINIKUBE_LOCATION=20345
	I0204 18:18:31.449801  305155 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0204 18:18:31.452642  305155 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20345-299426/kubeconfig
	I0204 18:18:31.455562  305155 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20345-299426/.minikube
	I0204 18:18:31.458336  305155 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0204 18:18:31.463965  305155 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0204 18:18:31.464278  305155 driver.go:394] Setting default libvirt URI to qemu:///system
	I0204 18:18:31.493797  305155 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0204 18:18:31.493908  305155 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0204 18:18:31.554611  305155 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-02-04 18:18:31.538687091 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0204 18:18:31.554726  305155 docker.go:318] overlay module found
	I0204 18:18:31.557780  305155 out.go:97] Using the docker driver based on user configuration
	I0204 18:18:31.557821  305155 start.go:297] selected driver: docker
	I0204 18:18:31.557828  305155 start.go:901] validating driver "docker" against <nil>
	I0204 18:18:31.557944  305155 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0204 18:18:31.612498  305155 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-02-04 18:18:31.603840087 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0204 18:18:31.612707  305155 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0204 18:18:31.612982  305155 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0204 18:18:31.613133  305155 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0204 18:18:31.616301  305155 out.go:169] Using Docker driver with root privileges
	I0204 18:18:31.619176  305155 cni.go:84] Creating CNI manager for ""
	I0204 18:18:31.619240  305155 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0204 18:18:31.619255  305155 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0204 18:18:31.619339  305155 start.go:340] cluster config:
	{Name:download-only-305189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:download-only-305189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0204 18:18:31.622391  305155 out.go:97] Starting "download-only-305189" primary control-plane node in "download-only-305189" cluster
	I0204 18:18:31.622412  305155 cache.go:121] Beginning downloading kic base image for docker with crio
	I0204 18:18:31.625281  305155 out.go:97] Pulling base image v0.0.46 ...
	I0204 18:18:31.625306  305155 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0204 18:18:31.625419  305155 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0204 18:18:31.641652  305155 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0204 18:18:31.641772  305155 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0204 18:18:31.641799  305155 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory, skipping pull
	I0204 18:18:31.641805  305155 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in cache, skipping pull
	I0204 18:18:31.641814  305155 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	I0204 18:18:31.697016  305155 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4
	I0204 18:18:31.697044  305155 cache.go:56] Caching tarball of preloaded images
	I0204 18:18:31.697198  305155 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0204 18:18:31.700368  305155 out.go:97] Downloading Kubernetes v1.32.1 preload ...
	I0204 18:18:31.700389  305155 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4 ...
	I0204 18:18:31.786825  305155 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:2975fc7b8b3f798b17cd470734f6f7e1 -> /home/jenkins/minikube-integration/20345-299426/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-305189 host does not exist
	  To start a cluster, run: "minikube start -p download-only-305189"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.08s)
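Note: the preload fetch above appends the expected digest as a ?checksum=md5:... query parameter and verifies it after download (the convention used by hashicorp/go-getter, which minikube's download path builds on). A minimal Go sketch of that download-then-verify pattern, using a hypothetical downloadWithMD5 helper rather than minikube's actual download.go:

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // downloadWithMD5 streams url to dest while hashing the bytes, then
    // compares the hex-encoded md5 against the expected sum. Hypothetical helper.
    func downloadWithMD5(url, dest, wantMD5 string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()
        h := md5.New()
        // Tee the download through the hash while writing to disk.
        if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
        }
        return nil
    }

    func main() {
        // URL and md5 taken from the download.go:108 line above; dest is illustrative.
        err := downloadWithMD5(
            "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4",
            "/tmp/preload.tar.lz4",
            "2975fc7b8b3f798b17cd470734f6f7e1")
        if err != nil {
            fmt.Println(err)
        }
    }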

TestDownloadOnly/v1.32.1/DeleteAll (0.22s)
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.22s)

TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.15s)
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-305189
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.61s)
=== RUN   TestBinaryMirror
I0204 18:18:40.919756  304949 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-388273 --alsologtostderr --binary-mirror http://127.0.0.1:41211 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-388273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-388273
--- PASS: TestBinaryMirror (0.61s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-405803
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-405803: exit status 85 (73.743878ms)

-- stdout --
	* Profile "addons-405803" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-405803"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-405803
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-405803: exit status 85 (72.643397ms)

-- stdout --
	* Profile "addons-405803" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-405803"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
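Both PreSetup checks share one assertion shape: run the addon command against a profile that does not exist and require exit status 85. A small self-contained Go sketch of extracting that exit code with os/exec (the profile name here is deliberately fictional):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Expect the addon command to fail with exit status 85 when the
        // profile is missing, as in the Non-zero exit lines above.
        cmd := exec.Command("out/minikube-linux-arm64",
            "addons", "enable", "dashboard", "-p", "no-such-profile")
        err := cmd.Run()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            fmt.Printf("exit status %d\n", exitErr.ExitCode())
            if exitErr.ExitCode() != 85 {
                fmt.Println("unexpected exit code")
            }
            return
        }
        fmt.Println("command unexpectedly succeeded or failed to start:", err)
    }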

TestAddons/Setup (179.54s)
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-405803 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-405803 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m59.541703818s)
--- PASS: TestAddons/Setup (179.54s)

TestAddons/serial/GCPAuth/Namespaces (0.25s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-405803 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-405803 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.25s)

TestAddons/serial/GCPAuth/FakeCredentials (10.94s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-405803 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-405803 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cb875892-d93b-488f-bf72-c3438f8bfb54] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cb875892-d93b-488f-bf72-c3438f8bfb54] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.005048746s
addons_test.go:633: (dbg) Run:  kubectl --context addons-405803 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-405803 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-405803 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-405803 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.94s)

TestAddons/parallel/Registry (17.74s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 13.626574ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-mdjf2" [1570cbcd-de9e-40d6-9b39-eeaa2ae29aa3] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004025264s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-wx964" [03b6e46f-0fbf-4f50-a587-01760afd7776] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004092852s
addons_test.go:331: (dbg) Run:  kubectl --context addons-405803 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-405803 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-405803 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.471248037s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-405803 ip
2025/02/04 18:22:18 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-405803 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.74s)
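The registry check ends with the harness issuing a plain GET against the node's registry endpoint ([DEBUG] GET http://192.168.49.2:5000). A minimal sketch of that probe; the IP and port come from this particular run and will differ elsewhere:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Probe the in-cluster registry the way the harness's [DEBUG] line does.
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get("http://192.168.49.2:5000")
        if err != nil {
            fmt.Println("registry not reachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("registry responded with", resp.Status)
    }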

TestAddons/parallel/InspektorGadget (11.85s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-fzdrt" [67e1c4bc-7fb4-4083-a991-38b4ff60bde4] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.007668848s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-405803 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-405803 addons disable inspektor-gadget --alsologtostderr -v=1: (5.845774901s)
--- PASS: TestAddons/parallel/InspektorGadget (11.85s)

TestAddons/parallel/MetricsServer (6.89s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 6.886074ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-nsn5c" [d52d4c27-1cdc-47f6-89b7-ac84dc713b0e] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003872123s
addons_test.go:402: (dbg) Run:  kubectl --context addons-405803 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-405803 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.89s)

TestAddons/parallel/CSI (57.31s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0204 18:22:18.343025  304949 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0204 18:22:18.349469  304949 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0204 18:22:18.349511  304949 kapi.go:107] duration metric: took 10.507214ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 10.517996ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-405803 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-405803 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [20974a5a-7ad4-49ba-99b3-38b8fb3dd082] Pending
helpers_test.go:344: "task-pv-pod" [20974a5a-7ad4-49ba-99b3-38b8fb3dd082] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [20974a5a-7ad4-49ba-99b3-38b8fb3dd082] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.005674841s
addons_test.go:511: (dbg) Run:  kubectl --context addons-405803 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-405803 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-405803 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-405803 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-405803 delete pod task-pv-pod: (1.054881946s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-405803 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-405803 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-405803 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d76df8d4-4e60-4608-a46c-0dc2564647b4] Pending
helpers_test.go:344: "task-pv-pod-restore" [d76df8d4-4e60-4608-a46c-0dc2564647b4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d76df8d4-4e60-4608-a46c-0dc2564647b4] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003720786s
addons_test.go:553: (dbg) Run:  kubectl --context addons-405803 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-405803 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-405803 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-405803 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-405803 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-405803 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.766414062s)
--- PASS: TestAddons/parallel/CSI (57.31s)
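The repeated helpers_test.go:394 lines above are a poll loop: kubectl is re-run until the PVC's .status.phase reports the wanted value or a deadline passes. A sketch of that pattern (hypothetical helper, not minikube's actual code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForPVCPhase shells out to kubectl the same way the helper above does,
    // polling .status.phase until it matches or the timeout expires.
    func waitForPVCPhase(ctx, name, want string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", ctx,
                "get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", "default").Output()
            if err == nil && strings.TrimSpace(string(out)) == want {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pvc %s did not reach phase %s within %s", name, want, timeout)
    }

    func main() {
        if err := waitForPVCPhase("addons-405803", "hpvc", "Bound", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }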

TestAddons/parallel/Headlamp (17.09s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-405803 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-405803 --alsologtostderr -v=1: (1.027173736s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-7bkjz" [ce1bf2b6-9c60-4db7-a3b0-eab80e4d251e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-7bkjz" [ce1bf2b6-9c60-4db7-a3b0-eab80e4d251e] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004584288s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-405803 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-405803 addons disable headlamp --alsologtostderr -v=1: (6.059868687s)
--- PASS: TestAddons/parallel/Headlamp (17.09s)

TestAddons/parallel/CloudSpanner (6.58s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-74dc6" [0bcd3867-1027-4270-93c9-c900b133cbda] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00414629s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-405803 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.58s)

TestAddons/parallel/LocalPath (10.41s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-405803 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-405803 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-405803 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [1b42b2e1-7b24-402e-be13-d40437a3e09a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [1b42b2e1-7b24-402e-be13-d40437a3e09a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [1b42b2e1-7b24-402e-be13-d40437a3e09a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003673958s
addons_test.go:906: (dbg) Run:  kubectl --context addons-405803 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-405803 ssh "cat /opt/local-path-provisioner/pvc-ad381d1e-0adf-4704-b4a1-94f012121e12_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-405803 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-405803 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-405803 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.41s)

TestAddons/parallel/NvidiaDevicePlugin (5.52s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-khhzw" [cb29110a-dc1d-4c8e-a151-cb776e2f36b1] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004598823s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-405803 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.52s)

TestAddons/parallel/Yakd (11.73s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-4fqlt" [a34b046e-c10c-408f-9696-12716a7b4910] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004156634s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-405803 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-405803 addons disable yakd --alsologtostderr -v=1: (5.72557472s)
--- PASS: TestAddons/parallel/Yakd (11.73s)

TestAddons/StoppedEnableDisable (12.2s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-405803
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-405803: (11.90836521s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-405803
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-405803
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-405803
--- PASS: TestAddons/StoppedEnableDisable (12.20s)

TestCertOptions (38.34s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-460575 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-460575 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (35.605672766s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-460575 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-460575 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-460575 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-460575" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-460575
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-460575: (2.05385422s)
--- PASS: TestCertOptions (38.34s)

TestCertExpiration (239.92s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-542211 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-542211 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (39.57562808s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-542211 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E0204 19:08:32.754848  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-542211 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (17.835560292s)
helpers_test.go:175: Cleaning up "cert-expiration-542211" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-542211
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-542211: (2.503782819s)
--- PASS: TestCertExpiration (239.92s)
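The test first provisions certs with --cert-expiration=3m, then restarts with --cert-expiration=8760h and expects the short-lived certs to be rotated. One way to inspect the resulting validity window is to parse the apiserver certificate directly (path as surfaced in TestCertOptions above; reading it from the host normally requires minikube ssh):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Load and decode the PEM-encoded apiserver cert, then report NotAfter.
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
        if err != nil {
            fmt.Println(err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("expires %s (in %s)\n",
            cert.NotAfter, time.Until(cert.NotAfter).Round(time.Minute))
    }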

TestForceSystemdFlag (39.21s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-936918 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-936918 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.892095966s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-936918 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-936918" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-936918
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-936918: (2.944364992s)
--- PASS: TestForceSystemdFlag (39.21s)
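docker_test.go:132 verifies the flag by dumping /etc/crio/crio.conf.d/02-crio.conf from inside the node. A sketch of the follow-up check one could script around that output, assuming (not verified here) that --force-systemd surfaces as a systemd cgroup_manager setting in CRI-O's config:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Dump the CRI-O drop-in via minikube ssh, as the test command does.
        out, err := exec.Command("out/minikube-linux-arm64", "-p", "force-systemd-flag-936918",
            "ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
            fmt.Println("CRI-O is using the systemd cgroup manager")
        } else {
            fmt.Println("systemd cgroup manager not found in config")
        }
    }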

TestForceSystemdEnv (40.62s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-065096 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-065096 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.979329509s)
helpers_test.go:175: Cleaning up "force-systemd-env-065096" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-065096
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-065096: (2.635829266s)
--- PASS: TestForceSystemdEnv (40.62s)

TestErrorSpam/setup (29.8s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-742469 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-742469 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-742469 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-742469 --driver=docker  --container-runtime=crio: (29.794815187s)
--- PASS: TestErrorSpam/setup (29.80s)

TestErrorSpam/start (0.84s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-742469 --log_dir /tmp/nospam-742469 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-742469 --log_dir /tmp/nospam-742469 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-742469 --log_dir /tmp/nospam-742469 start --dry-run
--- PASS: TestErrorSpam/start (0.84s)

TestErrorSpam/status (1.26s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-742469 --log_dir /tmp/nospam-742469 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-742469 --log_dir /tmp/nospam-742469 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-742469 --log_dir /tmp/nospam-742469 status
--- PASS: TestErrorSpam/status (1.26s)

TestErrorSpam/pause (1.79s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-742469 --log_dir /tmp/nospam-742469 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-742469 --log_dir /tmp/nospam-742469 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-742469 --log_dir /tmp/nospam-742469 pause
--- PASS: TestErrorSpam/pause (1.79s)

TestErrorSpam/unpause (1.81s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-742469 --log_dir /tmp/nospam-742469 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-742469 --log_dir /tmp/nospam-742469 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-742469 --log_dir /tmp/nospam-742469 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

TestErrorSpam/stop (1.53s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-742469 --log_dir /tmp/nospam-742469 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-742469 --log_dir /tmp/nospam-742469 stop: (1.321436962s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-742469 --log_dir /tmp/nospam-742469 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-742469 --log_dir /tmp/nospam-742469 stop
--- PASS: TestErrorSpam/stop (1.53s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20345-299426/.minikube/files/etc/test/nested/copy/304949/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (51.52s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-arm64 start -p functional-289833 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0204 18:26:42.038054  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
E0204 18:26:42.044457  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
E0204 18:26:42.055833  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
E0204 18:26:42.077216  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
E0204 18:26:42.118592  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
E0204 18:26:42.200005  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
E0204 18:26:42.361419  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
E0204 18:26:42.683062  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
E0204 18:26:43.325035  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
E0204 18:26:44.607282  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
E0204 18:26:47.168723  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
E0204 18:26:52.290745  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
E0204 18:27:02.532938  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-arm64 start -p functional-289833 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (51.514823518s)
--- PASS: TestFunctional/serial/StartWithProxy (51.52s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (28.97s)
=== RUN   TestFunctional/serial/SoftStart
I0204 18:27:07.148644  304949 config.go:182] Loaded profile config "functional-289833": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
functional_test.go:676: (dbg) Run:  out/minikube-linux-arm64 start -p functional-289833 --alsologtostderr -v=8
E0204 18:27:23.015214  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-linux-arm64 start -p functional-289833 --alsologtostderr -v=8: (28.97008826s)
functional_test.go:680: soft start took 28.973481616s for "functional-289833" cluster.
I0204 18:27:36.119059  304949 config.go:182] Loaded profile config "functional-289833": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (28.97s)

TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-289833 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.69s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-289833 cache add registry.k8s.io/pause:3.1: (1.615182199s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-289833 cache add registry.k8s.io/pause:3.3: (1.661284784s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-289833 cache add registry.k8s.io/pause:latest: (1.414902681s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.69s)

TestFunctional/serial/CacheCmd/cache/add_local (1.44s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-289833 /tmp/TestFunctionalserialCacheCmdcacheadd_local186689370/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 cache add minikube-local-cache-test:functional-289833
functional_test.go:1111: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 cache delete minikube-local-cache-test:functional-289833
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-289833
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.44s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.19s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-289833 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (309.692092ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-linux-arm64 -p functional-289833 cache reload: (1.234469184s)
functional_test.go:1180: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.19s)

TestFunctional/serial/CacheCmd/cache/delete (0.14s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.19s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 kubectl -- --context functional-289833 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.19s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-289833 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (37.45s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-arm64 start -p functional-289833 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0204 18:28:03.976652  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-arm64 start -p functional-289833 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.438476443s)
functional_test.go:778: restart took 37.445781994s for "functional-289833" cluster.
I0204 18:28:22.962501  304949 config.go:182] Loaded profile config "functional-289833": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (37.45s)

TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-289833 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
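ComponentHealth reads the control-plane pods as JSON and reports each component's phase and Ready condition, which is what the functional_test.go:842/852 lines echo. A minimal Go sketch of that parse, declaring only the fields the check needs (not the test's actual structs):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // podList models just the pod fields the health check reads.
    type podList struct {
        Items []struct {
            Metadata struct {
                Name string `json:"name"`
            } `json:"metadata"`
            Status struct {
                Phase      string `json:"phase"`
                Conditions []struct {
                    Type   string `json:"type"`
                    Status string `json:"status"`
                } `json:"conditions"`
            } `json:"status"`
        } `json:"items"`
    }

    func main() {
        out, err := exec.Command("kubectl", "--context", "functional-289833",
            "get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        var pods podList
        if err := json.Unmarshal(out, &pods); err != nil {
            fmt.Println(err)
            return
        }
        for _, p := range pods.Items {
            ready := "NotReady"
            for _, c := range p.Status.Conditions {
                if c.Type == "Ready" && c.Status == "True" {
                    ready = "Ready"
                }
            }
            fmt.Printf("%s phase: %s, status: %s\n", p.Metadata.Name, p.Status.Phase, ready)
        }
    }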

TestFunctional/serial/LogsCmd (1.74s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-arm64 -p functional-289833 logs: (1.736114289s)
--- PASS: TestFunctional/serial/LogsCmd (1.74s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.79s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 logs --file /tmp/TestFunctionalserialLogsFileCmd38636026/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-arm64 -p functional-289833 logs --file /tmp/TestFunctionalserialLogsFileCmd38636026/001/logs.txt: (1.790794464s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.79s)

                                                
                                    
TestFunctional/serial/InvalidService (4.29s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-289833 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-289833
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-289833: exit status 115 (411.088524ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30318 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-289833 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.29s)
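
Note: SVC_UNREACHABLE above means the NodePort service exists but no running pod backs it. A hedged sketch of the quickest signal, reading the service's Endpoints through kubectl; hasEndpoints is a hypothetical helper.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hasEndpoints reports whether a Service has at least one ready address,
	// which is what separates a reachable service from invalid-svc above.
	func hasEndpoints(kubectx, namespace, svc string) (bool, error) {
		out, err := exec.Command("kubectl", "--context", kubectx, "-n", namespace,
			"get", "endpoints", svc, "-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) != "", nil
	}

	func main() {
		ok, err := hasEndpoints("functional-289833", "default", "invalid-svc")
		fmt.Println(ok, err) // expect false for the scenario in the log
	}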

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.53s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-289833 config get cpus: exit status 14 (63.79281ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-289833 config get cpus: exit status 14 (102.173123ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.53s)
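
Note: the sequence above relies on `config get` exiting with status 14 when the key is unset, and on set/get round-tripping. A sketch that drives the same unset/get/set/get cycle from Go; the binary path is the environment-specific one from this log.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// run invokes the minikube binary and returns combined output plus exit code.
	func run(args ...string) (string, int) {
		out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
		code := 0
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			code = ee.ExitCode()
		}
		return string(out), code
	}

	func main() {
		run("-p", "functional-289833", "config", "unset", "cpus")
		_, code := run("-p", "functional-289833", "config", "get", "cpus")
		fmt.Println("get after unset, exit code:", code) // 14, as in the log

		run("-p", "functional-289833", "config", "set", "cpus", "2")
		out, code := run("-p", "functional-289833", "config", "get", "cpus")
		fmt.Println("get after set:", out, code) // "2", 0
	}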

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.67s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-289833 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-289833 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 331760: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.67s)

                                                
                                    
TestFunctional/parallel/DryRun (0.51s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-289833 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-289833 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (214.240458ms)

-- stdout --
	* [functional-289833] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20345
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20345-299426/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20345-299426/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0204 18:29:05.476765  331454 out.go:345] Setting OutFile to fd 1 ...
	I0204 18:29:05.477194  331454 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0204 18:29:05.477201  331454 out.go:358] Setting ErrFile to fd 2...
	I0204 18:29:05.477205  331454 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0204 18:29:05.477792  331454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20345-299426/.minikube/bin
	I0204 18:29:05.479045  331454 out.go:352] Setting JSON to false
	I0204 18:29:05.480003  331454 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7895,"bootTime":1738685851,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0204 18:29:05.480219  331454 start.go:139] virtualization:  
	I0204 18:29:05.483508  331454 out.go:177] * [functional-289833] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0204 18:29:05.487384  331454 notify.go:220] Checking for updates...
	I0204 18:29:05.487902  331454 out.go:177]   - MINIKUBE_LOCATION=20345
	I0204 18:29:05.490770  331454 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0204 18:29:05.493666  331454 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20345-299426/kubeconfig
	I0204 18:29:05.496712  331454 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20345-299426/.minikube
	I0204 18:29:05.500113  331454 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0204 18:29:05.503089  331454 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0204 18:29:05.506858  331454 config.go:182] Loaded profile config "functional-289833": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0204 18:29:05.507400  331454 driver.go:394] Setting default libvirt URI to qemu:///system
	I0204 18:29:05.545940  331454 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0204 18:29:05.546070  331454 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0204 18:29:05.616766  331454 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-04 18:29:05.592538386 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0204 18:29:05.616882  331454 docker.go:318] overlay module found
	I0204 18:29:05.620350  331454 out.go:177] * Using the docker driver based on existing profile
	I0204 18:29:05.623082  331454 start.go:297] selected driver: docker
	I0204 18:29:05.623109  331454 start.go:901] validating driver "docker" against &{Name:functional-289833 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-289833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0204 18:29:05.623229  331454 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0204 18:29:05.628686  331454 out.go:201] 
	W0204 18:29:05.631602  331454 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0204 18:29:05.634786  331454 out.go:201] 

** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-arm64 start -p functional-289833 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.51s)
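
Note: the dry run fails pre-flight because the requested 250MiB sits below the 1800MB floor quoted in the error. A sketch of that validation step (an illustration of the check, not minikube's actual code):

	package main

	import "fmt"

	const minUsableMemoryMB = 1800 // the floor cited in the log above

	// validateMemory mirrors the kind of pre-flight check that produces
	// RSRC_INSUFFICIENT_REQ_MEMORY.
	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMemoryMB {
			return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMemoryMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateMemory(250))  // fails, matching the dry-run output
		fmt.Println(validateMemory(4000)) // ok: the profile's configured 4000MB
	}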

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.23s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 start -p functional-289833 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-289833 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (228.343987ms)

-- stdout --
	* [functional-289833] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20345
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20345-299426/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20345-299426/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0204 18:29:05.262596  331407 out.go:345] Setting OutFile to fd 1 ...
	I0204 18:29:05.262843  331407 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0204 18:29:05.262875  331407 out.go:358] Setting ErrFile to fd 2...
	I0204 18:29:05.262899  331407 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0204 18:29:05.263263  331407 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20345-299426/.minikube/bin
	I0204 18:29:05.263687  331407 out.go:352] Setting JSON to false
	I0204 18:29:05.264790  331407 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7895,"bootTime":1738685851,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0204 18:29:05.264903  331407 start.go:139] virtualization:  
	I0204 18:29:05.268597  331407 out.go:177] * [functional-289833] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	I0204 18:29:05.271870  331407 notify.go:220] Checking for updates...
	I0204 18:29:05.271836  331407 out.go:177]   - MINIKUBE_LOCATION=20345
	I0204 18:29:05.275989  331407 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0204 18:29:05.279006  331407 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20345-299426/kubeconfig
	I0204 18:29:05.281955  331407 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20345-299426/.minikube
	I0204 18:29:05.284701  331407 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0204 18:29:05.287502  331407 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0204 18:29:05.290716  331407 config.go:182] Loaded profile config "functional-289833": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0204 18:29:05.291237  331407 driver.go:394] Setting default libvirt URI to qemu:///system
	I0204 18:29:05.335870  331407 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0204 18:29:05.336092  331407 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0204 18:29:05.405050  331407 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-04 18:29:05.395346533 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0204 18:29:05.405170  331407 docker.go:318] overlay module found
	I0204 18:29:05.408226  331407 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0204 18:29:05.411146  331407 start.go:297] selected driver: docker
	I0204 18:29:05.411173  331407 start.go:901] validating driver "docker" against &{Name:functional-289833 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-289833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0204 18:29:05.411294  331407 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0204 18:29:05.414819  331407 out.go:201] 
	W0204 18:29:05.417843  331407 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0204 18:29:05.421006  331407 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)
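
Note: the French text above ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY...", the localized form of the DryRun exit message) is selected from the caller's locale. A hedged sketch of forcing a locale when invoking the binary; that LC_ALL is the variable consulted is an assumption here.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Re-run the undersized dry run with a French locale in the
		// environment and expect the localized error message.
		cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-289833",
			"--dry-run", "--memory", "250MB")
		cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8")
		out, _ := cmd.CombinedOutput()
		fmt.Println(string(out))
	}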

                                                
                                    
TestFunctional/parallel/StatusCmd (1.03s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)
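
Note: `status -f` above takes a Go text/template rendered against minikube's status struct (the "kublet" label is copied verbatim from the command in the log). A self-contained sketch rendering the same template locally; this Status field set is an assumption for illustration.

	package main

	import (
		"os"
		"text/template"
	)

	// Status mirrors the fields referenced by the -f template above.
	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse(
			"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
		_ = tmpl.Execute(os.Stdout, Status{
			Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured",
		})
	}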

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (12.79s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1644: (dbg) Run:  kubectl --context functional-289833 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-289833 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-8449669db6-wlgbt" [eac174fd-ea65-4222-a75c-dac7f584909e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-8449669db6-wlgbt" [eac174fd-ea65-4222-a75c-dac7f584909e] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.00305298s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:32270
functional_test.go:1692: http://192.168.49.2:32270: success! body:

Hostname: hello-node-connect-8449669db6-wlgbt

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32270
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.79s)
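
Note: the flow above is create deployment, expose it as a NodePort, resolve the URL with `minikube service --url`, then GET it. A sketch of the final probe step; the URL is the cluster-specific one found in this run.

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probe issues the same kind of GET the test performs against the
	// NodePort URL printed by `minikube service ... --url`.
	func probe(url string) error {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return err
		}
		fmt.Printf("status %d, %d bytes\n", resp.StatusCode, len(body))
		return nil
	}

	func main() {
		if err := probe("http://192.168.49.2:32270"); err != nil {
			fmt.Println("probe failed:", err)
		}
	}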

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.29s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.29s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26.26s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4af316ad-4651-416d-839b-6b363b00f5c6] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004406196s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-289833 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-289833 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-289833 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-289833 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d2c47928-3f48-49ee-abb9-220bba8f256c] Pending
helpers_test.go:344: "sp-pod" [d2c47928-3f48-49ee-abb9-220bba8f256c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d2c47928-3f48-49ee-abb9-220bba8f256c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004106062s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-289833 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-289833 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-289833 delete -f testdata/storage-provisioner/pod.yaml: (1.260392988s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-289833 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5e4696ed-c34d-4cb8-a199-48fd6484b3e3] Pending
helpers_test.go:344: "sp-pod" [5e4696ed-c34d-4cb8-a199-48fd6484b3e3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004227155s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-289833 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.26s)
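
Note: the test above proves persistence by writing a file into the PVC-backed mount, deleting and recreating the pod, then listing the file again. A hedged kubectl-driven sketch of the same sequence; the real test also waits for pod readiness between apply and exec.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func kubectl(args ...string) error {
		all := append([]string{"--context", "functional-289833"}, args...)
		out, err := exec.Command("kubectl", all...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
		return nil
	}

	func main() {
		// Manifest path is the testdata file named in the log.
		steps := [][]string{
			{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
			{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
			{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
			{"exec", "sp-pod", "--", "ls", "/tmp/mount"}, // readiness wait omitted
		}
		for _, s := range steps {
			if err := kubectl(s...); err != nil {
				fmt.Println("step failed:", s, err)
				return
			}
		}
		fmt.Println("file survived pod recreation")
	}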

                                                
                                    
TestFunctional/parallel/SSHCmd (0.71s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh -n functional-289833 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 cp functional-289833:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd643733662/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh -n functional-289833 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh -n functional-289833 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.28s)

                                                
                                    
TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/304949/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh "sudo cat /etc/test/nested/copy/304949/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
TestFunctional/parallel/CertSync (2.26s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/304949.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh "sudo cat /etc/ssl/certs/304949.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/304949.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh "sudo cat /usr/share/ca-certificates/304949.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3049492.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh "sudo cat /etc/ssl/certs/3049492.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/3049492.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh "sudo cat /usr/share/ca-certificates/3049492.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.26s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-289833 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.68s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh "sudo systemctl is-active docker"
2025/02/04 18:29:15 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-289833 ssh "sudo systemctl is-active docker": exit status 1 (344.012715ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-289833 ssh "sudo systemctl is-active containerd": exit status 1 (338.839599ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.68s)
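
Note: `systemctl is-active` prints the unit state and exits non-zero for inactive units (the "status 3" above), so the exit code alone is a usable signal. A sketch of the same check:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// runtimeActive asks systemd whether a container runtime unit is active,
	// keying off the exit code exactly as the test output does.
	func runtimeActive(unit string) (bool, error) {
		err := exec.Command("systemctl", "is-active", "--quiet", unit).Run()
		if err == nil {
			return true, nil
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return false, nil // inactive, failed, or unknown unit
		}
		return false, err // systemctl itself could not be run
	}

	func main() {
		for _, u := range []string{"docker", "containerd", "crio"} {
			active, err := runtimeActive(u)
			fmt.Println(u, active, err)
		}
	}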

                                                
                                    
TestFunctional/parallel/License (0.31s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-289833 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-289833 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-289833 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-289833 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 329375: os: process already finished
helpers_test.go:502: unable to terminate pid 329179: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-289833 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-289833 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [b7f01121-531c-48aa-bf66-893f316746db] Pending
helpers_test.go:344: "nginx-svc" [b7f01121-531c-48aa-bf66-893f316746db] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [b7f01121-531c-48aa-bf66-893f316746db] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004121477s
I0204 18:28:41.212007  304949 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.46s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-289833 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)
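
Note: the jsonpath above reads the ingress IP that `minikube tunnel` assigns to a LoadBalancer service. A sketch that polls the same field until the tunnel has populated it; the context and service names are the ones from this run.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForIngressIP polls .status.loadBalancer.ingress[0].ip until set.
	func waitForIngressIP(kubectx, svc string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubectx, "get", "svc", svc,
				"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
			if err == nil {
				if ip := strings.TrimSpace(string(out)); ip != "" {
					return ip, nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return "", fmt.Errorf("no ingress IP for %s within %v", svc, timeout)
	}

	func main() {
		ip, err := waitForIngressIP("functional-289833", "nginx-svc", time.Minute)
		fmt.Println(ip, err) // e.g. 10.107.86.193, as in the tunnel check below
	}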

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.86.193 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-289833 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1454: (dbg) Run:  kubectl --context functional-289833 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-289833 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64fc58db8c-zdk69" [495ff962-e313-4b0c-ac5d-c2400cf929e7] Pending
helpers_test.go:344: "hello-node-64fc58db8c-zdk69" [495ff962-e313-4b0c-ac5d-c2400cf929e7] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.005263579s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.6s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1332: Took "525.375597ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1346: Took "70.881374ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.60s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1383: Took "348.469704ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1396: Took "60.995757ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.21s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-289833 /tmp/TestFunctionalparallelMountCmdany-port3486206831/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1738693740892099376" to /tmp/TestFunctionalparallelMountCmdany-port3486206831/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1738693740892099376" to /tmp/TestFunctionalparallelMountCmdany-port3486206831/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1738693740892099376" to /tmp/TestFunctionalparallelMountCmdany-port3486206831/001/test-1738693740892099376
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-289833 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (355.197224ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0204 18:29:01.247586  304949 retry.go:31] will retry after 273.780063ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb  4 18:29 created-by-test
-rw-r--r-- 1 docker docker 24 Feb  4 18:29 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb  4 18:29 test-1738693740892099376
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh cat /mount-9p/test-1738693740892099376
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-289833 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [04a58bfe-f427-49b5-bb29-4a599584e71f] Pending
helpers_test.go:344: "busybox-mount" [04a58bfe-f427-49b5-bb29-4a599584e71f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [04a58bfe-f427-49b5-bb29-4a599584e71f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [04a58bfe-f427-49b5-bb29-4a599584e71f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.005371859s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-289833 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-289833 /tmp/TestFunctionalparallelMountCmdany-port3486206831/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.21s)
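
Note: the mount check above greps `findmnt -T /mount-9p` for a 9p entry and retries after a short backoff when the mount has not settled yet (the retry.go line). A sketch of the same predicate and retry:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// mounted9p checks whether path is served by a 9p filesystem, the same
	// predicate as `findmnt -T /mount-9p | grep 9p` in the log above.
	func mounted9p(path string) bool {
		out, err := exec.Command("findmnt", "-T", path, "-o", "FSTYPE", "-n").Output()
		return err == nil && strings.Contains(string(out), "9p")
	}

	func main() {
		for attempt := 0; attempt < 2; attempt++ {
			if mounted9p("/mount-9p") {
				fmt.Println("9p mount is up")
				return
			}
			time.Sleep(300 * time.Millisecond)
		}
		fmt.Println("9p mount not found")
	}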

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 service list -o json
functional_test.go:1511: Took "594.546089ms" to run "out/minikube-linux-arm64 -p functional-289833 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:32764
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:32764
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)

TestFunctional/parallel/MountCmd/specific-port (1.6s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-289833 /tmp/TestFunctionalparallelMountCmdspecific-port3617342187/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-289833 /tmp/TestFunctionalparallelMountCmdspecific-port3617342187/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-289833 ssh "sudo umount -f /mount-9p": exit status 1 (427.936897ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-289833 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-289833 /tmp/TestFunctionalparallelMountCmdspecific-port3617342187/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.60s)
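
Note: the forced unmount above failing with "not mounted" (ssh exit status 32) is expected; the mount daemon had already been stopped, so cleanup's umount found nothing to detach. A minimal guard sketch (not part of the suite; mount point and profile name taken from the log above):

    # Only force-unmount the 9p share if it is still mounted, avoiding the
    # expected "umount: /mount-9p: not mounted" failure seen above.
    if minikube -p functional-289833 ssh "findmnt -T /mount-9p" >/dev/null 2>&1; then
      minikube -p functional-289833 ssh "sudo umount -f /mount-9p"
    fi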

TestFunctional/parallel/MountCmd/VerifyCleanup (2.97s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-289833 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1388198709/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-289833 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1388198709/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-289833 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1388198709/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-289833 ssh "findmnt -T" /mount1: exit status 1 (1.025965184s)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0204 18:29:12.736864  304949 retry.go:31] will retry after 718.968501ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-289833 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-289833 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1388198709/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-289833 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1388198709/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-289833 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1388198709/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.97s)
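
Note: the first findmnt probe lost a race against the three freshly started mount daemons and failed once before the logged retry succeeded. A hedged polling sketch that mirrors that retry (profile and paths from the log above):

    # Poll until all three 9p mounts are visible, then kill every mount
    # daemon, as the test's cleanup does with --kill=true.
    for mp in /mount1 /mount2 /mount3; do
      until minikube -p functional-289833 ssh "findmnt -T $mp" >/dev/null 2>&1; do
        sleep 1
      done
    done
    minikube mount -p functional-289833 --kill=true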

TestFunctional/parallel/Version/short (0.08s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.37s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 version -o=json --components
functional_test.go:2287: (dbg) Done: out/minikube-linux-arm64 -p functional-289833 version -o=json --components: (1.371442803s)
--- PASS: TestFunctional/parallel/Version/components (1.37s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-289833 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-289833
localhost/kicbase/echo-server:functional-289833
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20241212-9f82dd49
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-289833 image ls --format short --alsologtostderr:
I0204 18:29:23.138147  334265 out.go:345] Setting OutFile to fd 1 ...
I0204 18:29:23.138303  334265 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0204 18:29:23.138313  334265 out.go:358] Setting ErrFile to fd 2...
I0204 18:29:23.138319  334265 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0204 18:29:23.138715  334265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20345-299426/.minikube/bin
I0204 18:29:23.139615  334265 config.go:182] Loaded profile config "functional-289833": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0204 18:29:23.139791  334265 config.go:182] Loaded profile config "functional-289833": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0204 18:29:23.141338  334265 cli_runner.go:164] Run: docker container inspect functional-289833 --format={{.State.Status}}
I0204 18:29:23.177717  334265 ssh_runner.go:195] Run: systemctl --version
I0204 18:29:23.177811  334265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-289833
I0204 18:29:23.202807  334265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/functional-289833/id_rsa Username:docker}
I0204 18:29:23.306177  334265 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-289833 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/minikube-local-cache-test     | functional-289833  | cec0d521134b9 | 3.33kB |
| registry.k8s.io/kube-controller-manager | v1.32.1            | 2933761aa7ada | 88.2MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 2be0bcf609c65 | 98.3MB |
| docker.io/kindest/kindnetd              | v20241212-9f82dd49 | e1181ee320546 | 99MB   |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| docker.io/library/nginx                 | latest             | 0dff3f9967e3c | 201MB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/kube-proxy              | v1.32.1            | e124fbed851d7 | 98.3MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| registry.k8s.io/kube-scheduler          | v1.32.1            | ddb38cac617cb | 69MB   |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| docker.io/library/nginx                 | alpine             | f9d642c42f7bc | 52.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| localhost/kicbase/echo-server           | functional-289833  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | 7fc9d4aa817aa | 143MB  |
| registry.k8s.io/kube-apiserver          | v1.32.1            | 265c2dedf28ab | 95MB   |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-289833 image ls --format table --alsologtostderr:
I0204 18:29:23.446860  334333 out.go:345] Setting OutFile to fd 1 ...
I0204 18:29:23.446963  334333 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0204 18:29:23.447023  334333 out.go:358] Setting ErrFile to fd 2...
I0204 18:29:23.447051  334333 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0204 18:29:23.447324  334333 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20345-299426/.minikube/bin
I0204 18:29:23.448028  334333 config.go:182] Loaded profile config "functional-289833": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0204 18:29:23.448208  334333 config.go:182] Loaded profile config "functional-289833": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0204 18:29:23.448746  334333 cli_runner.go:164] Run: docker container inspect functional-289833 --format={{.State.Status}}
I0204 18:29:23.468130  334333 ssh_runner.go:195] Run: systemctl --version
I0204 18:29:23.468197  334333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-289833
I0204 18:29:23.488394  334333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/functional-289833/id_rsa Username:docker}
I0204 18:29:23.580681  334333 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-289833 image ls --format json --alsologtostderr:
[{"id":"0dff3f9967e3cb3482965cc57c30e171f1def88e574757def5474cd791f50a16","repoDigests":["docker.io/library/nginx@sha256:b85b19a40a81f79c2a7855efc75fdc67a57e82db7bc94041a90763dcabc4a6c6","docker.io/library/nginx@sha256:bc2f6a7c8ddbccf55bdb19659ce3b0a92ca6559e86d42677a5a02ef6bda2fcef"],"repoTags":["docker.io/library/nginx:latest"],"size":"201125287"},{"id":"2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:71478b03f55b6a17c25fee181fbaaafb7ac4f5314c4007eb0cf3d35fb20938e3","registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"88241478"},{"id":"ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c","repoDigests":["registry.k8s.io/kube-scheduler@sha256:244bf1ea0194bd050a2408898d65b6a3259624fdd5a3541788b40b4e94c02fc1","registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f6
4be4d7fff224d305d78a05bac38f72e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"68973892"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19","repoDigests":["registry.k8s.io/kube-apiserver@sha256:88154e5cc4415415c0cbfb49ad1d63ea2de74614b7b567d5f344c5bcb5c5f244","registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"94991840"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registr
y.k8s.io/pause:latest"],"size":"246070"},{"id":"e1181ee320546c66f17956a302db1b7899d88a593f116726718851133de588b6","repoDigests":["docker.io/kindest/kindnetd@sha256:564b9fd29e72542e4baa14b382d4d9ee22132141caa6b9803b71faf9a4a799be","docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"99018802"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399
310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"cec0d521134b9531271b8d9e85d3d4f911563a9995097cb37c60c7b2cef0b514","repoDigests":["localhost/minikube-local-cache-test@sha256:3418655e2e929508698a82103105752e8d97bd7b894f70b8ba711fabdaefe584"],"repoTags":["localhost/minikube-local-cache-test:functional-289833"],"size":"3330"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"ce2d
2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-289833"],"size":"4788229"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5","registry.k8s.io/kube-proxy@sha256:0d36a8e2f0f6a06753c1ae9949a9a4a58d752f8364fd2ab083fcd836c37f844d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"98313623"},{"id":"3d18732f8686cc3c878
055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903","repoDigests":["docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"98274354"},{"id":"f9d642c42f7bc79efd0a3aa2b7fe913e0324a23c2f27c6b7f3f112473d47131d","repoDigests":["docker.io/library/nginx@sha256:4338a8ba9b9962d07e30e7ff4bbf27d62ee7523deb7205e8f0912169f1bbac10","docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4"],"repoTags":["docker.io/library/nginx:alpine"],"size":"52333544"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests"
:["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82","repoDigests":["registry.k8s.io/etcd@sha256:313adb872febec3a5740969d5bf1b1df0d222f8fa06675f34db5a7fc437356a1","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"143226622"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-289833 image ls --format json --alsologtostderr:
I0204 18:29:23.444876  334332 out.go:345] Setting OutFile to fd 1 ...
I0204 18:29:23.445080  334332 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0204 18:29:23.445106  334332 out.go:358] Setting ErrFile to fd 2...
I0204 18:29:23.445126  334332 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0204 18:29:23.445434  334332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20345-299426/.minikube/bin
I0204 18:29:23.446207  334332 config.go:182] Loaded profile config "functional-289833": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0204 18:29:23.446388  334332 config.go:182] Loaded profile config "functional-289833": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0204 18:29:23.446944  334332 cli_runner.go:164] Run: docker container inspect functional-289833 --format={{.State.Status}}
I0204 18:29:23.467427  334332 ssh_runner.go:195] Run: systemctl --version
I0204 18:29:23.467483  334332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-289833
I0204 18:29:23.502867  334332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/functional-289833/id_rsa Username:docker}
I0204 18:29:23.597739  334332 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
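
Note: the stdout above is a JSON array of {id, repoDigests, repoTags, size} objects, so it pipes cleanly into jq; a small sketch (jq assumed to be installed):

    # List each image's first tag (or <untagged>) with its size in bytes.
    minikube -p functional-289833 image ls --format json \
      | jq -r '.[] | "\(.repoTags[0] // "<untagged>")\t\(.size)"'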

TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-289833 image ls --format yaml --alsologtostderr:
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 0dff3f9967e3cb3482965cc57c30e171f1def88e574757def5474cd791f50a16
repoDigests:
- docker.io/library/nginx@sha256:b85b19a40a81f79c2a7855efc75fdc67a57e82db7bc94041a90763dcabc4a6c6
- docker.io/library/nginx@sha256:bc2f6a7c8ddbccf55bdb19659ce3b0a92ca6559e86d42677a5a02ef6bda2fcef
repoTags:
- docker.io/library/nginx:latest
size: "201125287"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
- registry.k8s.io/kube-proxy@sha256:0d36a8e2f0f6a06753c1ae9949a9a4a58d752f8364fd2ab083fcd836c37f844d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "98313623"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:88154e5cc4415415c0cbfb49ad1d63ea2de74614b7b567d5f344c5bcb5c5f244
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "94991840"
- id: 2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:71478b03f55b6a17c25fee181fbaaafb7ac4f5314c4007eb0cf3d35fb20938e3
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "88241478"
- id: ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:244bf1ea0194bd050a2408898d65b6a3259624fdd5a3541788b40b4e94c02fc1
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "68973892"
- id: e1181ee320546c66f17956a302db1b7899d88a593f116726718851133de588b6
repoDigests:
- docker.io/kindest/kindnetd@sha256:564b9fd29e72542e4baa14b382d4d9ee22132141caa6b9803b71faf9a4a799be
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "99018802"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-289833
size: "4788229"
- id: cec0d521134b9531271b8d9e85d3d4f911563a9995097cb37c60c7b2cef0b514
repoDigests:
- localhost/minikube-local-cache-test@sha256:3418655e2e929508698a82103105752e8d97bd7b894f70b8ba711fabdaefe584
repoTags:
- localhost/minikube-local-cache-test:functional-289833
size: "3330"
- id: 7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82
repoDigests:
- registry.k8s.io/etcd@sha256:313adb872febec3a5740969d5bf1b1df0d222f8fa06675f34db5a7fc437356a1
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "143226622"
- id: 2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903
repoDigests:
- docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "98274354"
- id: f9d642c42f7bc79efd0a3aa2b7fe913e0324a23c2f27c6b7f3f112473d47131d
repoDigests:
- docker.io/library/nginx@sha256:4338a8ba9b9962d07e30e7ff4bbf27d62ee7523deb7205e8f0912169f1bbac10
- docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4
repoTags:
- docker.io/library/nginx:alpine
size: "52333544"
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-289833 image ls --format yaml --alsologtostderr:
I0204 18:29:23.150252  334266 out.go:345] Setting OutFile to fd 1 ...
I0204 18:29:23.150456  334266 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0204 18:29:23.150483  334266 out.go:358] Setting ErrFile to fd 2...
I0204 18:29:23.150502  334266 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0204 18:29:23.150959  334266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20345-299426/.minikube/bin
I0204 18:29:23.151751  334266 config.go:182] Loaded profile config "functional-289833": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0204 18:29:23.151864  334266 config.go:182] Loaded profile config "functional-289833": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0204 18:29:23.152496  334266 cli_runner.go:164] Run: docker container inspect functional-289833 --format={{.State.Status}}
I0204 18:29:23.176048  334266 ssh_runner.go:195] Run: systemctl --version
I0204 18:29:23.176111  334266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-289833
I0204 18:29:23.212338  334266 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/functional-289833/id_rsa Username:docker}
I0204 18:29:23.301679  334266 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-289833 ssh pgrep buildkitd: exit status 1 (303.66634ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 image build -t localhost/my-image:functional-289833 testdata/build --alsologtostderr
E0204 18:29:25.898584  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:332: (dbg) Done: out/minikube-linux-arm64 -p functional-289833 image build -t localhost/my-image:functional-289833 testdata/build --alsologtostderr: (2.933316379s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-arm64 -p functional-289833 image build -t localhost/my-image:functional-289833 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 2fccd096032
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-289833
--> b9f2e3aecd7
Successfully tagged localhost/my-image:functional-289833
b9f2e3aecd78b6ed7a47b520e195494699657e0a0f2adcc0b2bf5af3d36df130
functional_test.go:340: (dbg) Stderr: out/minikube-linux-arm64 -p functional-289833 image build -t localhost/my-image:functional-289833 testdata/build --alsologtostderr:
I0204 18:29:24.001317  334454 out.go:345] Setting OutFile to fd 1 ...
I0204 18:29:24.004653  334454 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0204 18:29:24.004729  334454 out.go:358] Setting ErrFile to fd 2...
I0204 18:29:24.004751  334454 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0204 18:29:24.005134  334454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20345-299426/.minikube/bin
I0204 18:29:24.006061  334454 config.go:182] Loaded profile config "functional-289833": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0204 18:29:24.007661  334454 config.go:182] Loaded profile config "functional-289833": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0204 18:29:24.008548  334454 cli_runner.go:164] Run: docker container inspect functional-289833 --format={{.State.Status}}
I0204 18:29:24.027061  334454 ssh_runner.go:195] Run: systemctl --version
I0204 18:29:24.027127  334454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-289833
I0204 18:29:24.047877  334454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/functional-289833/id_rsa Username:docker}
I0204 18:29:24.136871  334454 build_images.go:161] Building image from path: /tmp/build.761573820.tar
I0204 18:29:24.136956  334454 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0204 18:29:24.146368  334454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.761573820.tar
I0204 18:29:24.149902  334454 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.761573820.tar: stat -c "%s %y" /var/lib/minikube/build/build.761573820.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.761573820.tar': No such file or directory
I0204 18:29:24.149933  334454 ssh_runner.go:362] scp /tmp/build.761573820.tar --> /var/lib/minikube/build/build.761573820.tar (3072 bytes)
I0204 18:29:24.175717  334454 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.761573820
I0204 18:29:24.185034  334454 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.761573820 -xf /var/lib/minikube/build/build.761573820.tar
I0204 18:29:24.194355  334454 crio.go:315] Building image: /var/lib/minikube/build/build.761573820
I0204 18:29:24.194439  334454 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-289833 /var/lib/minikube/build/build.761573820 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0204 18:29:26.852036  334454 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-289833 /var/lib/minikube/build/build.761573820 --cgroup-manager=cgroupfs: (2.65756737s)
I0204 18:29:26.852110  334454 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.761573820
I0204 18:29:26.860900  334454 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.761573820.tar
I0204 18:29:26.869498  334454 build_images.go:217] Built localhost/my-image:functional-289833 from /tmp/build.761573820.tar
I0204 18:29:26.869529  334454 build_images.go:133] succeeded building to: functional-289833
I0204 18:29:26.869534  334454 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)
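
Note: because buildkitd was not running (the pgrep probe exited 1), minikube fell back to podman inside the node, as the Stderr shows. A hedged reconstruction of the fixture from the three STEP lines above (the real testdata/build contents may differ):

    # Rebuild an equivalent context and run the same in-cluster build.
    mkdir -p /tmp/build && cd /tmp/build
    echo test > content.txt    # assumed placeholder content
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    minikube -p functional-289833 image build -t localhost/my-image:functional-289833 .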

TestFunctional/parallel/ImageCommands/Setup (0.81s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-289833
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.81s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.67s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 image load --daemon kicbase/echo-server:functional-289833 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-arm64 -p functional-289833 image load --daemon kicbase/echo-server:functional-289833 --alsologtostderr: (1.343441174s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.67s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 image load --daemon kicbase/echo-server:functional-289833 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.07s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-289833
functional_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 image load --daemon kicbase/echo-server:functional-289833 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 image save kicbase/echo-server:functional-289833 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 image rm kicbase/echo-server:functional-289833 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.85s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.85s)
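
Note: ImageSaveToFile, ImageRemove, and ImageLoadFromFile together exercise a save/remove/load round trip; condensed into one hedged sketch (the tarball path is illustrative):

    # Save the image to a tarball, drop it from the runtime, restore it,
    # and confirm it is listed again.
    minikube -p functional-289833 image save kicbase/echo-server:functional-289833 /tmp/echo-server.tar
    minikube -p functional-289833 image rm kicbase/echo-server:functional-289833
    minikube -p functional-289833 image load /tmp/echo-server.tar
    minikube -p functional-289833 image ls | grep echo-server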

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-289833
functional_test.go:441: (dbg) Run:  out/minikube-linux-arm64 -p functional-289833 image save --daemon kicbase/echo-server:functional-289833 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-289833
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-289833
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-289833
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-289833
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (184.76s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-161131 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0204 18:31:42.034805  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
E0204 18:32:09.740109  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-161131 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (3m3.928074974s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (184.76s)
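
Note: with --ha, the cluster starts with three control-plane nodes (a fourth, worker node is added in a later subtest). A quick hedged sanity check alongside the status call the test already makes:

    # Expect three control-plane nodes reporting Ready.
    kubectl --context ha-161131 get nodes
    minikube -p ha-161131 status -v=7 --alsologtostderr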

TestMultiControlPlane/serial/DeployApp (8.76s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-161131 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-161131 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-161131 -- rollout status deployment/busybox: (5.588748092s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-161131 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-161131 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-161131 -- exec busybox-58667487b6-8tdfs -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-161131 -- exec busybox-58667487b6-9xbqh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-161131 -- exec busybox-58667487b6-qxcjh -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-161131 -- exec busybox-58667487b6-8tdfs -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-161131 -- exec busybox-58667487b6-9xbqh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-161131 -- exec busybox-58667487b6-qxcjh -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-161131 -- exec busybox-58667487b6-8tdfs -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-161131 -- exec busybox-58667487b6-9xbqh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-161131 -- exec busybox-58667487b6-qxcjh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.76s)
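
Note: the test unrolls the same nslookup across each busybox pod by name; a hedged loop equivalent, assuming the ha-pod-dns-test.yaml fixture labels its pods app=busybox:

    # Probe in-cluster DNS from every pod of the busybox deployment.
    for pod in $(kubectl --context ha-161131 get pods -l app=busybox -o name); do
      kubectl --context ha-161131 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done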

TestMultiControlPlane/serial/PingHostFromPods (1.78s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-161131 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-161131 -- exec busybox-58667487b6-8tdfs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-161131 -- exec busybox-58667487b6-8tdfs -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-161131 -- exec busybox-58667487b6-9xbqh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-161131 -- exec busybox-58667487b6-9xbqh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-161131 -- exec busybox-58667487b6-qxcjh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-161131 -- exec busybox-58667487b6-qxcjh -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.78s)

TestMultiControlPlane/serial/AddWorkerNode (36.36s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-161131 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-161131 -v=7 --alsologtostderr: (35.355372778s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-161131 status -v=7 --alsologtostderr: (1.003908525s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (36.36s)

TestMultiControlPlane/serial/NodeLabels (0.12s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-161131 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.99s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.99s)

TestMultiControlPlane/serial/CopyFile (19.2s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 cp testdata/cp-test.txt ha-161131:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 cp ha-161131:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile625428901/001/cp-test_ha-161131.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 cp ha-161131:/home/docker/cp-test.txt ha-161131-m02:/home/docker/cp-test_ha-161131_ha-161131-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131-m02 "sudo cat /home/docker/cp-test_ha-161131_ha-161131-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 cp ha-161131:/home/docker/cp-test.txt ha-161131-m03:/home/docker/cp-test_ha-161131_ha-161131-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131-m03 "sudo cat /home/docker/cp-test_ha-161131_ha-161131-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 cp ha-161131:/home/docker/cp-test.txt ha-161131-m04:/home/docker/cp-test_ha-161131_ha-161131-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131-m04 "sudo cat /home/docker/cp-test_ha-161131_ha-161131-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 cp testdata/cp-test.txt ha-161131-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 cp ha-161131-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile625428901/001/cp-test_ha-161131-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 cp ha-161131-m02:/home/docker/cp-test.txt ha-161131:/home/docker/cp-test_ha-161131-m02_ha-161131.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131 "sudo cat /home/docker/cp-test_ha-161131-m02_ha-161131.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 cp ha-161131-m02:/home/docker/cp-test.txt ha-161131-m03:/home/docker/cp-test_ha-161131-m02_ha-161131-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131-m03 "sudo cat /home/docker/cp-test_ha-161131-m02_ha-161131-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 cp ha-161131-m02:/home/docker/cp-test.txt ha-161131-m04:/home/docker/cp-test_ha-161131-m02_ha-161131-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131-m04 "sudo cat /home/docker/cp-test_ha-161131-m02_ha-161131-m04.txt"
E0204 18:33:32.755534  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
E0204 18:33:32.761881  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
E0204 18:33:32.773189  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
E0204 18:33:32.795179  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
E0204 18:33:32.837286  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 cp testdata/cp-test.txt ha-161131-m03:/home/docker/cp-test.txt
E0204 18:33:32.919124  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
E0204 18:33:33.080532  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131-m03 "sudo cat /home/docker/cp-test.txt"
E0204 18:33:33.402165  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 cp ha-161131-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile625428901/001/cp-test_ha-161131-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131-m03 "sudo cat /home/docker/cp-test.txt"
E0204 18:33:34.046672  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 cp ha-161131-m03:/home/docker/cp-test.txt ha-161131:/home/docker/cp-test_ha-161131-m03_ha-161131.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131 "sudo cat /home/docker/cp-test_ha-161131-m03_ha-161131.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 cp ha-161131-m03:/home/docker/cp-test.txt ha-161131-m02:/home/docker/cp-test_ha-161131-m03_ha-161131-m02.txt
E0204 18:33:35.329155  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131-m02 "sudo cat /home/docker/cp-test_ha-161131-m03_ha-161131-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 cp ha-161131-m03:/home/docker/cp-test.txt ha-161131-m04:/home/docker/cp-test_ha-161131-m03_ha-161131-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131-m04 "sudo cat /home/docker/cp-test_ha-161131-m03_ha-161131-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 cp testdata/cp-test.txt ha-161131-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131-m04 "sudo cat /home/docker/cp-test.txt"
E0204 18:33:37.890388  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 cp ha-161131-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile625428901/001/cp-test_ha-161131-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 cp ha-161131-m04:/home/docker/cp-test.txt ha-161131:/home/docker/cp-test_ha-161131-m04_ha-161131.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131 "sudo cat /home/docker/cp-test_ha-161131-m04_ha-161131.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 cp ha-161131-m04:/home/docker/cp-test.txt ha-161131-m02:/home/docker/cp-test_ha-161131-m04_ha-161131-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131-m02 "sudo cat /home/docker/cp-test_ha-161131-m04_ha-161131-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 cp ha-161131-m04:/home/docker/cp-test.txt ha-161131-m03:/home/docker/cp-test_ha-161131-m04_ha-161131-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 ssh -n ha-161131-m03 "sudo cat /home/docker/cp-test_ha-161131-m04_ha-161131-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.20s)
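
Note: every copy above is verified the same way: push the file with minikube cp, then read it back on the target node over minikube ssh -n <node> and compare contents. A minimal sketch of one such round trip (profile, node, and paths taken from this run):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// roundTrip pushes a local file to a node with `minikube cp` and reads
	// it back with `minikube ssh`, mirroring the CopyFile verification loop.
	func roundTrip(profile, node, local, remote string) (string, error) {
		cp := exec.Command("minikube", "-p", profile, "cp", local, node+":"+remote)
		if err := cp.Run(); err != nil {
			return "", fmt.Errorf("cp: %w", err)
		}
		out, err := exec.Command("minikube", "-p", profile,
			"ssh", "-n", node, "sudo cat "+remote).Output()
		return string(out), err
	}

	func main() {
		got, err := roundTrip("ha-161131", "ha-161131-m02",
			"testdata/cp-test.txt", "/home/docker/cp-test.txt")
		if err != nil {
			panic(err)
		}
		fmt.Print(got)
	}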

TestMultiControlPlane/serial/StopSecondaryNode (12.79s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 node stop m02 -v=7 --alsologtostderr
E0204 18:33:43.013417  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
E0204 18:33:53.255597  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-161131 node stop m02 -v=7 --alsologtostderr: (12.033356959s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-161131 status -v=7 --alsologtostderr: exit status 7 (756.436237ms)

-- stdout --
	ha-161131
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-161131-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-161131-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-161131-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0204 18:33:54.103705  350327 out.go:345] Setting OutFile to fd 1 ...
	I0204 18:33:54.103952  350327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0204 18:33:54.103979  350327 out.go:358] Setting ErrFile to fd 2...
	I0204 18:33:54.103996  350327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0204 18:33:54.104378  350327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20345-299426/.minikube/bin
	I0204 18:33:54.104609  350327 out.go:352] Setting JSON to false
	I0204 18:33:54.104677  350327 mustload.go:65] Loading cluster: ha-161131
	I0204 18:33:54.104780  350327 notify.go:220] Checking for updates...
	I0204 18:33:54.105211  350327 config.go:182] Loaded profile config "ha-161131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0204 18:33:54.105256  350327 status.go:174] checking status of ha-161131 ...
	I0204 18:33:54.105882  350327 cli_runner.go:164] Run: docker container inspect ha-161131 --format={{.State.Status}}
	I0204 18:33:54.128271  350327 status.go:371] ha-161131 host status = "Running" (err=<nil>)
	I0204 18:33:54.128295  350327 host.go:66] Checking if "ha-161131" exists ...
	I0204 18:33:54.128620  350327 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-161131
	I0204 18:33:54.157881  350327 host.go:66] Checking if "ha-161131" exists ...
	I0204 18:33:54.158182  350327 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0204 18:33:54.158240  350327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-161131
	I0204 18:33:54.177190  350327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33156 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/ha-161131/id_rsa Username:docker}
	I0204 18:33:54.269939  350327 ssh_runner.go:195] Run: systemctl --version
	I0204 18:33:54.274456  350327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0204 18:33:54.287175  350327 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0204 18:33:54.347534  350327 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:48 OomKillDisable:true NGoroutines:73 SystemTime:2025-02-04 18:33:54.336713028 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0204 18:33:54.348141  350327 kubeconfig.go:125] found "ha-161131" server: "https://192.168.49.254:8443"
	I0204 18:33:54.348217  350327 api_server.go:166] Checking apiserver status ...
	I0204 18:33:54.348261  350327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0204 18:33:54.359325  350327 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1413/cgroup
	I0204 18:33:54.369123  350327 api_server.go:182] apiserver freezer: "11:freezer:/docker/6ad34ce90f9a8593aad446fa9c271b6b537681d7dcbd322b00a39fe2c521d5ba/crio/crio-3311a3682e86ad589876bade6405b88694f3f70fcfc5024fc6ef3de65dc5a879"
	I0204 18:33:54.369193  350327 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6ad34ce90f9a8593aad446fa9c271b6b537681d7dcbd322b00a39fe2c521d5ba/crio/crio-3311a3682e86ad589876bade6405b88694f3f70fcfc5024fc6ef3de65dc5a879/freezer.state
	I0204 18:33:54.381759  350327 api_server.go:204] freezer state: "THAWED"
	I0204 18:33:54.381848  350327 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0204 18:33:54.392963  350327 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0204 18:33:54.392993  350327 status.go:463] ha-161131 apiserver status = Running (err=<nil>)
	I0204 18:33:54.393004  350327 status.go:176] ha-161131 status: &{Name:ha-161131 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0204 18:33:54.393019  350327 status.go:174] checking status of ha-161131-m02 ...
	I0204 18:33:54.393325  350327 cli_runner.go:164] Run: docker container inspect ha-161131-m02 --format={{.State.Status}}
	I0204 18:33:54.410853  350327 status.go:371] ha-161131-m02 host status = "Stopped" (err=<nil>)
	I0204 18:33:54.410879  350327 status.go:384] host is not running, skipping remaining checks
	I0204 18:33:54.410886  350327 status.go:176] ha-161131-m02 status: &{Name:ha-161131-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0204 18:33:54.410907  350327 status.go:174] checking status of ha-161131-m03 ...
	I0204 18:33:54.411216  350327 cli_runner.go:164] Run: docker container inspect ha-161131-m03 --format={{.State.Status}}
	I0204 18:33:54.434623  350327 status.go:371] ha-161131-m03 host status = "Running" (err=<nil>)
	I0204 18:33:54.434651  350327 host.go:66] Checking if "ha-161131-m03" exists ...
	I0204 18:33:54.434949  350327 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-161131-m03
	I0204 18:33:54.459904  350327 host.go:66] Checking if "ha-161131-m03" exists ...
	I0204 18:33:54.460240  350327 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0204 18:33:54.460291  350327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-161131-m03
	I0204 18:33:54.478929  350327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/ha-161131-m03/id_rsa Username:docker}
	I0204 18:33:54.577128  350327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0204 18:33:54.589342  350327 kubeconfig.go:125] found "ha-161131" server: "https://192.168.49.254:8443"
	I0204 18:33:54.589374  350327 api_server.go:166] Checking apiserver status ...
	I0204 18:33:54.589415  350327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0204 18:33:54.599954  350327 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1333/cgroup
	I0204 18:33:54.612774  350327 api_server.go:182] apiserver freezer: "11:freezer:/docker/00b5801879d0c2123e379d0a0e62ced40d86c7c74a25c1da2dcb8f050417ada4/crio/crio-fa2c7242eef9a9a8b3fb0a305f87c09bb3233c9f365c68d867b820797c7fe058"
	I0204 18:33:54.612996  350327 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/00b5801879d0c2123e379d0a0e62ced40d86c7c74a25c1da2dcb8f050417ada4/crio/crio-fa2c7242eef9a9a8b3fb0a305f87c09bb3233c9f365c68d867b820797c7fe058/freezer.state
	I0204 18:33:54.625900  350327 api_server.go:204] freezer state: "THAWED"
	I0204 18:33:54.625977  350327 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0204 18:33:54.634162  350327 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0204 18:33:54.634193  350327 status.go:463] ha-161131-m03 apiserver status = Running (err=<nil>)
	I0204 18:33:54.634203  350327 status.go:176] ha-161131-m03 status: &{Name:ha-161131-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0204 18:33:54.634219  350327 status.go:174] checking status of ha-161131-m04 ...
	I0204 18:33:54.634571  350327 cli_runner.go:164] Run: docker container inspect ha-161131-m04 --format={{.State.Status}}
	I0204 18:33:54.653284  350327 status.go:371] ha-161131-m04 host status = "Running" (err=<nil>)
	I0204 18:33:54.653311  350327 host.go:66] Checking if "ha-161131-m04" exists ...
	I0204 18:33:54.653637  350327 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-161131-m04
	I0204 18:33:54.673785  350327 host.go:66] Checking if "ha-161131-m04" exists ...
	I0204 18:33:54.674095  350327 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0204 18:33:54.674151  350327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-161131-m04
	I0204 18:33:54.696287  350327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/ha-161131-m04/id_rsa Username:docker}
	I0204 18:33:54.785585  350327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0204 18:33:54.801156  350327 status.go:176] ha-161131-m04 status: &{Name:ha-161131-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.79s)
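
Note: the status probe in the stderr above locates the kube-apiserver PID with pgrep, resolves its cgroup-v1 freezer path from /proc/<pid>/cgroup, and requires freezer.state to be THAWED before curling /healthz. A sketch of the freezer lookup, meant to run on the node itself; it assumes a cgroup-v1 host like the one in these logs (cgroup v2 has no freezer hierarchy), and the PID is the one this run happened to observe:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// freezerState finds a PID's freezer cgroup in /proc/<pid>/cgroup
	// (lines look like "11:freezer:/docker/.../crio-...") and reads its
	// freezer.state, as the "apiserver freezer" log lines above do.
	func freezerState(pid int) (string, error) {
		raw, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
		if err != nil {
			return "", err
		}
		for _, line := range strings.Split(string(raw), "\n") {
			parts := strings.SplitN(line, ":", 3) // id:controller:path
			if len(parts) == 3 && parts[1] == "freezer" {
				state, err := os.ReadFile(
					"/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
				return strings.TrimSpace(string(state)), err
			}
		}
		return "", fmt.Errorf("no freezer entry for pid %d", pid)
	}

	func main() {
		state, err := freezerState(1413) // PID from this run's log
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("freezer state:", state) // "THAWED" for a live apiserver
	}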

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

TestMultiControlPlane/serial/RestartSecondaryNode (25.89s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 node start m02 -v=7 --alsologtostderr
E0204 18:34:13.737968  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-161131 node start m02 -v=7 --alsologtostderr: (24.267132911s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-161131 status -v=7 --alsologtostderr: (1.47171981s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (25.89s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.29s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.287615714s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.29s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (165.73s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-161131 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-161131 -v=7 --alsologtostderr
E0204 18:34:54.701135  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-161131 -v=7 --alsologtostderr: (36.945547638s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-161131 --wait=true -v=7 --alsologtostderr
E0204 18:36:16.623309  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
E0204 18:36:42.034601  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-161131 --wait=true -v=7 --alsologtostderr: (2m8.582180962s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-161131
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (165.73s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.5s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-161131 node delete m03 -v=7 --alsologtostderr: (11.546900644s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.50s)
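
Note: the go-template above prints the status of the Ready condition for every node, one value per line, so the assertion reduces to "every printed value is True". A small sketch that runs the same template and folds the output into a single boolean:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// allNodesReady runs the same go-template query as the test: for each
	// node, print the status of its "Ready" condition, then require "True".
	func allNodesReady() (bool, error) {
		tmpl := `{{range .items}}{{range .status.conditions}}` +
			`{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		out, err := exec.Command("kubectl", "get", "nodes",
			"-o", "go-template="+tmpl).Output()
		if err != nil {
			return false, err
		}
		for _, v := range strings.Fields(string(out)) {
			if v != "True" {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		ok, err := allNodesReady()
		fmt.Println("all nodes Ready:", ok, "err:", err)
	}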

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

TestMultiControlPlane/serial/StopCluster (35.77s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-161131 stop -v=7 --alsologtostderr: (35.65291128s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-161131 status -v=7 --alsologtostderr: exit status 7 (114.880957ms)

-- stdout --
	ha-161131
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-161131-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-161131-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0204 18:37:57.429538  364445 out.go:345] Setting OutFile to fd 1 ...
	I0204 18:37:57.429730  364445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0204 18:37:57.429756  364445 out.go:358] Setting ErrFile to fd 2...
	I0204 18:37:57.429780  364445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0204 18:37:57.430142  364445 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20345-299426/.minikube/bin
	I0204 18:37:57.430391  364445 out.go:352] Setting JSON to false
	I0204 18:37:57.430441  364445 mustload.go:65] Loading cluster: ha-161131
	I0204 18:37:57.431478  364445 notify.go:220] Checking for updates...
	I0204 18:37:57.431628  364445 config.go:182] Loaded profile config "ha-161131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0204 18:37:57.431693  364445 status.go:174] checking status of ha-161131 ...
	I0204 18:37:57.432414  364445 cli_runner.go:164] Run: docker container inspect ha-161131 --format={{.State.Status}}
	I0204 18:37:57.451189  364445 status.go:371] ha-161131 host status = "Stopped" (err=<nil>)
	I0204 18:37:57.451210  364445 status.go:384] host is not running, skipping remaining checks
	I0204 18:37:57.451217  364445 status.go:176] ha-161131 status: &{Name:ha-161131 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0204 18:37:57.451245  364445 status.go:174] checking status of ha-161131-m02 ...
	I0204 18:37:57.451544  364445 cli_runner.go:164] Run: docker container inspect ha-161131-m02 --format={{.State.Status}}
	I0204 18:37:57.474002  364445 status.go:371] ha-161131-m02 host status = "Stopped" (err=<nil>)
	I0204 18:37:57.474023  364445 status.go:384] host is not running, skipping remaining checks
	I0204 18:37:57.474030  364445 status.go:176] ha-161131-m02 status: &{Name:ha-161131-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0204 18:37:57.474050  364445 status.go:174] checking status of ha-161131-m04 ...
	I0204 18:37:57.474366  364445 cli_runner.go:164] Run: docker container inspect ha-161131-m04 --format={{.State.Status}}
	I0204 18:37:57.491819  364445 status.go:371] ha-161131-m04 host status = "Stopped" (err=<nil>)
	I0204 18:37:57.491845  364445 status.go:384] host is not running, skipping remaining checks
	I0204 18:37:57.491855  364445 status.go:176] ha-161131-m04 status: &{Name:ha-161131-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.77s)
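
Note: minikube status deliberately exits non-zero when any node is down (exit status 7 in this run), so the test treats the exit code as informative rather than fatal and asserts on the stdout text instead. A sketch that captures both; the meaning of individual codes is version-dependent, so check minikube's documented exit codes rather than hard-coding 7:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// clusterStatus runs `minikube status` and returns the text report
	// together with the exit code, since a non-zero exit here usually
	// means "some hosts are stopped" rather than "the command failed".
	func clusterStatus(profile string) (string, int, error) {
		out, err := exec.Command("minikube", "-p", profile, "status").Output()
		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			code = exitErr.ExitCode()
			err = nil // keep the report; the code carries the signal
		}
		return string(out), code, err
	}

	func main() {
		report, code, err := clusterStatus("ha-161131")
		if err != nil {
			panic(err)
		}
		fmt.Printf("exit code %d\n%s", code, report)
	}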

TestMultiControlPlane/serial/RestartCluster (110.85s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-161131 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0204 18:38:32.755127  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
E0204 18:39:00.464651  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-161131 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m49.91004941s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (110.85s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

TestMultiControlPlane/serial/AddSecondaryNode (75.43s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-161131 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-161131 --control-plane -v=7 --alsologtostderr: (1m14.44015848s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-161131 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.43s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.049282167s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

TestJSONOutput/start/Command (55.83s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-513475 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0204 18:41:42.034212  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-513475 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (55.825551822s)
--- PASS: TestJSONOutput/start/Command (55.83s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.77s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-513475 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-513475 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.9s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-513475 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-513475 --output=json --user=testUser: (5.900753338s)
--- PASS: TestJSONOutput/stop/Command (5.90s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-201702 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-201702 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.818325ms)

-- stdout --
	{"specversion":"1.0","id":"14027d4e-0617-4e10-97c2-74429205da5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-201702] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1a170ddd-a5ec-4ffb-85f6-1c1135a40a69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20345"}}
	{"specversion":"1.0","id":"ac4e2b32-a60c-40e3-9f2e-72748bfc4db6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"576702f4-d42d-4ba4-8dbe-7e70ea148088","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20345-299426/kubeconfig"}}
	{"specversion":"1.0","id":"80b97738-e82d-43e3-985f-d269d438d1ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20345-299426/.minikube"}}
	{"specversion":"1.0","id":"e8642dee-5a13-437b-b41f-77970bca2eb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"592166e3-cca3-4ce8-837d-78a9f280a7d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f4364b99-53e3-41eb-a2ff-2cc374a676dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-201702" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-201702
--- PASS: TestErrorJSONOutput (0.25s)
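
Note: with --output=json minikube emits one CloudEvent per line: specversion 1.0, a type such as io.k8s.sigs.minikube.step, .info or .error, and the payload under "data" (string-valued in the events above). A sketch that consumes such a stream from stdin and surfaces steps and errors like the DRV_UNSUPPORTED_OS event:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event models the per-line CloudEvents that minikube prints with
	// --output=json; only the fields used below are declared.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// Pipe `minikube start --output=json ...` into this program.
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON noise in the stream
			}
			switch ev.Type {
			case "io.k8s.sigs.minikube.step":
				fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"],
					ev.Data["totalsteps"], ev.Data["message"])
			case "io.k8s.sigs.minikube.error":
				fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"],
					ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}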

TestKicCustomNetwork/create_custom_network (39.53s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-016223 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-016223 --network=: (37.348629174s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-016223" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-016223
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-016223: (2.162298405s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.53s)

TestKicCustomNetwork/use_default_bridge_network (37.04s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-388063 --network=bridge
E0204 18:43:05.101508  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
E0204 18:43:32.754833  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-388063 --network=bridge: (35.028105557s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-388063" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-388063
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-388063: (1.995360527s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.04s)

TestKicExistingNetwork (29.89s)

=== RUN   TestKicExistingNetwork
I0204 18:43:38.168906  304949 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0204 18:43:38.185608  304949 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0204 18:43:38.185691  304949 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0204 18:43:38.185710  304949 cli_runner.go:164] Run: docker network inspect existing-network
W0204 18:43:38.202016  304949 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0204 18:43:38.202049  304949 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0204 18:43:38.202067  304949 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0204 18:43:38.202252  304949 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0204 18:43:38.220261  304949 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4bff3d6d932c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c0:33:68:21} reservation:<nil>}
I0204 18:43:38.220679  304949 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ce3320}
I0204 18:43:38.220704  304949 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0204 18:43:38.220775  304949 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0204 18:43:38.291771  304949 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-794290 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-794290 --network=existing-network: (27.687660994s)
helpers_test.go:175: Cleaning up "existing-network-794290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-794290
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-794290: (2.047851191s)
I0204 18:44:08.046471  304949 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (29.89s)
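
Note: the log above shows the network-selection logic: inspect candidate private /24s, skip ranges already taken (192.168.49.0/24 here), and create the first free one as a bridge network carrying minikube's ownership labels, which is what lets the test clean it up later with a label filter. A sketch issuing the same create call with the flags shown in the log; the subnet/gateway pair is the free range this particular run found, so probe your own daemon for a free range first:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// createNetwork replays the `docker network create` from the log,
	// including the labels minikube uses to mark networks it owns.
	func createNetwork(name, subnet, gateway string) error {
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet, "--gateway="+gateway,
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io="+name,
			name).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
		return nil
	}

	func main() {
		err := createNetwork("existing-network", "192.168.58.0/24", "192.168.58.1")
		if err != nil {
			panic(err)
		}
		fmt.Println("network created")
	}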

TestKicCustomSubnet (32.92s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-853054 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-853054 --subnet=192.168.60.0/24: (30.765384087s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-853054 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-853054" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-853054
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-853054: (2.120425147s)
--- PASS: TestKicCustomSubnet (32.92s)
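
Note: the subnet verification above is a one-liner against docker's IPAM config. A sketch wrapping it so the requested range can be asserted directly (network name and subnet taken from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// subnetOf reads the first IPAM subnet of a docker network with the
	// same --format expression the test uses.
	func subnetOf(network string) (string, error) {
		out, err := exec.Command("docker", "network", "inspect", network,
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		got, err := subnetOf("custom-subnet-853054")
		if err != nil {
			panic(err)
		}
		fmt.Println("subnet matches:", got == "192.168.60.0/24")
	}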

TestKicStaticIP (38.18s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-564944 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-564944 --static-ip=192.168.200.200: (35.879034016s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-564944 ip
helpers_test.go:175: Cleaning up "static-ip-564944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-564944
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-564944: (2.144485442s)
--- PASS: TestKicStaticIP (38.18s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (67.08s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-326369 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-326369 --driver=docker  --container-runtime=crio: (30.083545586s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-328872 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-328872 --driver=docker  --container-runtime=crio: (31.217116311s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-326369
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-328872
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-328872" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-328872
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-328872: (2.079336871s)
helpers_test.go:175: Cleaning up "first-326369" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-326369
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-326369: (2.348887464s)
--- PASS: TestMinikubeProfile (67.08s)
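
The profile test drives two clusters and switches the active profile between them; "profile list -ojson" is what the assertions parse. Roughly, with illustrative profile names:

	$ minikube start -p first --driver=docker --container-runtime=crio
	$ minikube start -p second --driver=docker --container-runtime=crio
	$ minikube profile first            # make "first" the active profile
	$ minikube profile list -ojson      # machine-readable list the test asserts on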

TestMountStart/serial/StartWithMountFirst (7.56s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-281749 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-281749 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.558461623s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.56s)
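
The MountStart series configures the host-folder mount entirely through flags (port, uid/gid, msize), and --no-kubernetes skips cluster bring-up so only the mount machinery is exercised, which is why the start completes in seconds. A hand-run sketch; "m1" is an illustrative profile name:

	$ minikube start -p m1 --memory=2048 --mount --mount-port 46464 \
	    --mount-uid 0 --mount-gid 0 --mount-msize 6543 \
	    --no-kubernetes --driver=docker --container-runtime=crio
	$ minikube -p m1 ssh -- ls /minikube-host   # the host directory should be visible in the guest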

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-281749 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (9.16s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-283858 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E0204 18:46:42.034495  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-283858 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.157519226s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.16s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-283858 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-281749 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-281749 --alsologtostderr -v=5: (1.684398869s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-283858 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-283858
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-283858: (1.210806919s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (7.73s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-283858
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-283858: (6.733113564s)
--- PASS: TestMountStart/serial/RestartStopped (7.73s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-283858 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (76.84s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-887774 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-887774 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m16.307666059s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (76.84s)
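
--nodes=2 brings up a control plane plus one worker in a single start; "status" should then list both. A sketch with an illustrative profile name:

	$ minikube start -p mn --nodes=2 --memory=2200 --wait=true --driver=docker --container-runtime=crio
	$ minikube -p mn status   # expect a Control Plane entry and a Worker entry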

TestMultiNode/serial/DeployApp2Nodes (6.83s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887774 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887774 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-887774 -- rollout status deployment/busybox: (4.775575897s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887774 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887774 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887774 -- exec busybox-58667487b6-gcphk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887774 -- exec busybox-58667487b6-xbnm8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887774 -- exec busybox-58667487b6-gcphk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887774 -- exec busybox-58667487b6-xbnm8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887774 -- exec busybox-58667487b6-gcphk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887774 -- exec busybox-58667487b6-xbnm8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.83s)
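
The deployment check is ordinary kubectl routed through minikube's wrapper: apply a busybox manifest, wait for the rollout, then exec a DNS lookup in each pod to confirm resolution works from both nodes. A sketch; the manifest path and pod name are placeholders:

	$ minikube kubectl -p mn -- apply -f multinode-pod-dns-test.yaml
	$ minikube kubectl -p mn -- rollout status deployment/busybox
	$ minikube kubectl -p mn -- get pods -o jsonpath='{.items[*].metadata.name}'
	$ minikube kubectl -p mn -- exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local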

TestMultiNode/serial/PingHostFrom2Pods (1.06s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887774 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887774 -- exec busybox-58667487b6-gcphk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887774 -- exec busybox-58667487b6-gcphk -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887774 -- exec busybox-58667487b6-xbnm8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-887774 -- exec busybox-58667487b6-xbnm8 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.06s)

TestMultiNode/serial/AddNode (31.43s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-887774 -v 3 --alsologtostderr
E0204 18:48:32.755060  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-887774 -v 3 --alsologtostderr: (30.748209404s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (31.43s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-887774 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.67s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

TestMultiNode/serial/CopyFile (10.23s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 cp testdata/cp-test.txt multinode-887774:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 ssh -n multinode-887774 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 cp multinode-887774:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3837735427/001/cp-test_multinode-887774.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 ssh -n multinode-887774 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 cp multinode-887774:/home/docker/cp-test.txt multinode-887774-m02:/home/docker/cp-test_multinode-887774_multinode-887774-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 ssh -n multinode-887774 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 ssh -n multinode-887774-m02 "sudo cat /home/docker/cp-test_multinode-887774_multinode-887774-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 cp multinode-887774:/home/docker/cp-test.txt multinode-887774-m03:/home/docker/cp-test_multinode-887774_multinode-887774-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 ssh -n multinode-887774 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 ssh -n multinode-887774-m03 "sudo cat /home/docker/cp-test_multinode-887774_multinode-887774-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 cp testdata/cp-test.txt multinode-887774-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 ssh -n multinode-887774-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 cp multinode-887774-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3837735427/001/cp-test_multinode-887774-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 ssh -n multinode-887774-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 cp multinode-887774-m02:/home/docker/cp-test.txt multinode-887774:/home/docker/cp-test_multinode-887774-m02_multinode-887774.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 ssh -n multinode-887774-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 ssh -n multinode-887774 "sudo cat /home/docker/cp-test_multinode-887774-m02_multinode-887774.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 cp multinode-887774-m02:/home/docker/cp-test.txt multinode-887774-m03:/home/docker/cp-test_multinode-887774-m02_multinode-887774-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 ssh -n multinode-887774-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 ssh -n multinode-887774-m03 "sudo cat /home/docker/cp-test_multinode-887774-m02_multinode-887774-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 cp testdata/cp-test.txt multinode-887774-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 ssh -n multinode-887774-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 cp multinode-887774-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3837735427/001/cp-test_multinode-887774-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 ssh -n multinode-887774-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 cp multinode-887774-m03:/home/docker/cp-test.txt multinode-887774:/home/docker/cp-test_multinode-887774-m03_multinode-887774.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 ssh -n multinode-887774-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 ssh -n multinode-887774 "sudo cat /home/docker/cp-test_multinode-887774-m03_multinode-887774.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 cp multinode-887774-m03:/home/docker/cp-test.txt multinode-887774-m02:/home/docker/cp-test_multinode-887774-m03_multinode-887774-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 ssh -n multinode-887774-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 ssh -n multinode-887774-m02 "sudo cat /home/docker/cp-test_multinode-887774-m03_multinode-887774-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.23s)
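
"minikube cp" accepts any mix of host paths and <node>:<path> endpoints, which is why the test above walks every node pair in both directions and verifies each copy with an ssh cat. The three shapes, with illustrative profile and node names:

	$ minikube -p mn cp testdata/cp-test.txt mn:/home/docker/cp-test.txt          # host -> node
	$ minikube -p mn cp mn:/home/docker/cp-test.txt /tmp/cp-test.txt              # node -> host
	$ minikube -p mn cp mn:/home/docker/cp-test.txt mn-m02:/home/docker/copy.txt  # node -> node
	$ minikube -p mn ssh -n mn-m02 "sudo cat /home/docker/copy.txt"               # verify on the target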

TestMultiNode/serial/StopNode (2.26s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-887774 node stop m03: (1.208697711s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-887774 status: exit status 7 (538.965332ms)
-- stdout --
	multinode-887774
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-887774-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-887774-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-887774 status --alsologtostderr: exit status 7 (509.328334ms)
-- stdout --
	multinode-887774
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-887774-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-887774-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0204 18:49:05.490770  418170 out.go:345] Setting OutFile to fd 1 ...
	I0204 18:49:05.490959  418170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0204 18:49:05.490989  418170 out.go:358] Setting ErrFile to fd 2...
	I0204 18:49:05.491012  418170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0204 18:49:05.491281  418170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20345-299426/.minikube/bin
	I0204 18:49:05.491489  418170 out.go:352] Setting JSON to false
	I0204 18:49:05.491556  418170 mustload.go:65] Loading cluster: multinode-887774
	I0204 18:49:05.491640  418170 notify.go:220] Checking for updates...
	I0204 18:49:05.492071  418170 config.go:182] Loaded profile config "multinode-887774": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0204 18:49:05.492121  418170 status.go:174] checking status of multinode-887774 ...
	I0204 18:49:05.492705  418170 cli_runner.go:164] Run: docker container inspect multinode-887774 --format={{.State.Status}}
	I0204 18:49:05.510721  418170 status.go:371] multinode-887774 host status = "Running" (err=<nil>)
	I0204 18:49:05.510745  418170 host.go:66] Checking if "multinode-887774" exists ...
	I0204 18:49:05.511043  418170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-887774
	I0204 18:49:05.533923  418170 host.go:66] Checking if "multinode-887774" exists ...
	I0204 18:49:05.534225  418170 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0204 18:49:05.534271  418170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-887774
	I0204 18:49:05.553391  418170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33276 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/multinode-887774/id_rsa Username:docker}
	I0204 18:49:05.641832  418170 ssh_runner.go:195] Run: systemctl --version
	I0204 18:49:05.646420  418170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0204 18:49:05.659942  418170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0204 18:49:05.716684  418170 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:63 SystemTime:2025-02-04 18:49:05.706656035 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0204 18:49:05.717310  418170 kubeconfig.go:125] found "multinode-887774" server: "https://192.168.67.2:8443"
	I0204 18:49:05.717345  418170 api_server.go:166] Checking apiserver status ...
	I0204 18:49:05.717405  418170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0204 18:49:05.728875  418170 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1364/cgroup
	I0204 18:49:05.738777  418170 api_server.go:182] apiserver freezer: "11:freezer:/docker/77bc6018250a5262d4d57574c2f900cc525ac7aa014fb9a5fdb5f8fefc761fd2/crio/crio-4bd74ff5b813e34f9e6d0a707470869dedbfdf7f1f5b83ff5d52e906e1af0398"
	I0204 18:49:05.738918  418170 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/77bc6018250a5262d4d57574c2f900cc525ac7aa014fb9a5fdb5f8fefc761fd2/crio/crio-4bd74ff5b813e34f9e6d0a707470869dedbfdf7f1f5b83ff5d52e906e1af0398/freezer.state
	I0204 18:49:05.748102  418170 api_server.go:204] freezer state: "THAWED"
	I0204 18:49:05.748144  418170 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0204 18:49:05.757123  418170 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0204 18:49:05.757153  418170 status.go:463] multinode-887774 apiserver status = Running (err=<nil>)
	I0204 18:49:05.757163  418170 status.go:176] multinode-887774 status: &{Name:multinode-887774 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0204 18:49:05.757181  418170 status.go:174] checking status of multinode-887774-m02 ...
	I0204 18:49:05.757496  418170 cli_runner.go:164] Run: docker container inspect multinode-887774-m02 --format={{.State.Status}}
	I0204 18:49:05.776297  418170 status.go:371] multinode-887774-m02 host status = "Running" (err=<nil>)
	I0204 18:49:05.776322  418170 host.go:66] Checking if "multinode-887774-m02" exists ...
	I0204 18:49:05.776628  418170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-887774-m02
	I0204 18:49:05.795720  418170 host.go:66] Checking if "multinode-887774-m02" exists ...
	I0204 18:49:05.796021  418170 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0204 18:49:05.796060  418170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-887774-m02
	I0204 18:49:05.814948  418170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33281 SSHKeyPath:/home/jenkins/minikube-integration/20345-299426/.minikube/machines/multinode-887774-m02/id_rsa Username:docker}
	I0204 18:49:05.901746  418170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0204 18:49:05.914519  418170 status.go:176] multinode-887774-m02 status: &{Name:multinode-887774-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0204 18:49:05.914557  418170 status.go:174] checking status of multinode-887774-m03 ...
	I0204 18:49:05.914857  418170 cli_runner.go:164] Run: docker container inspect multinode-887774-m03 --format={{.State.Status}}
	I0204 18:49:05.933062  418170 status.go:371] multinode-887774-m03 host status = "Stopped" (err=<nil>)
	I0204 18:49:05.933089  418170 status.go:384] host is not running, skipping remaining checks
	I0204 18:49:05.933097  418170 status.go:176] multinode-887774-m03 status: &{Name:multinode-887774-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)

TestMultiNode/serial/StartAfterStop (10.66s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-887774 node start m03 -v=7 --alsologtostderr: (9.893832204s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.66s)
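
Single-node stop/start uses the node subcommand with the node's short name (m03 above); note from StopNode that "status" exits 7 while any node is down and returns to 0 once the node is running again:

	$ minikube -p mn node stop m03
	$ minikube -p mn status            # exit status 7: m03 shows host/kubelet Stopped
	$ minikube -p mn node start m03
	$ minikube -p mn status            # exit status 0 once every node is Running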

TestMultiNode/serial/RestartKeepsNodes (86.97s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-887774
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-887774
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-887774: (24.783172577s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-887774 --wait=true -v=8 --alsologtostderr
E0204 18:49:55.826847  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-887774 --wait=true -v=8 --alsologtostderr: (1m2.06035924s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-887774
--- PASS: TestMultiNode/serial/RestartKeepsNodes (86.97s)

TestMultiNode/serial/DeleteNode (5.25s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-887774 node delete m03: (4.608198932s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.25s)
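
Adding and removing workers follows the same pattern; "node add" joins a new worker under the next -mNN name (m03 in the AddNode run above) and "node delete" removes it from the cluster. A sketch:

	$ minikube node add -p mn             # joins a new worker, e.g. mn-m03
	$ minikube -p mn node delete m03
	$ minikube kubectl -p mn -- get nodes # the deleted node should no longer be listed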

TestMultiNode/serial/StopMultiNode (23.89s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-887774 stop: (23.687384682s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-887774 status: exit status 7 (100.258566ms)
-- stdout --
	multinode-887774
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-887774-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-887774 status --alsologtostderr: exit status 7 (97.638691ms)
-- stdout --
	multinode-887774
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-887774-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0204 18:51:12.665782  425591 out.go:345] Setting OutFile to fd 1 ...
	I0204 18:51:12.665918  425591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0204 18:51:12.665928  425591 out.go:358] Setting ErrFile to fd 2...
	I0204 18:51:12.665934  425591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0204 18:51:12.666188  425591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20345-299426/.minikube/bin
	I0204 18:51:12.666366  425591 out.go:352] Setting JSON to false
	I0204 18:51:12.666406  425591 mustload.go:65] Loading cluster: multinode-887774
	I0204 18:51:12.666490  425591 notify.go:220] Checking for updates...
	I0204 18:51:12.667811  425591 config.go:182] Loaded profile config "multinode-887774": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0204 18:51:12.667848  425591 status.go:174] checking status of multinode-887774 ...
	I0204 18:51:12.668606  425591 cli_runner.go:164] Run: docker container inspect multinode-887774 --format={{.State.Status}}
	I0204 18:51:12.686981  425591 status.go:371] multinode-887774 host status = "Stopped" (err=<nil>)
	I0204 18:51:12.687004  425591 status.go:384] host is not running, skipping remaining checks
	I0204 18:51:12.687011  425591 status.go:176] multinode-887774 status: &{Name:multinode-887774 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0204 18:51:12.687036  425591 status.go:174] checking status of multinode-887774-m02 ...
	I0204 18:51:12.687343  425591 cli_runner.go:164] Run: docker container inspect multinode-887774-m02 --format={{.State.Status}}
	I0204 18:51:12.709837  425591 status.go:371] multinode-887774-m02 host status = "Stopped" (err=<nil>)
	I0204 18:51:12.709861  425591 status.go:384] host is not running, skipping remaining checks
	I0204 18:51:12.709867  425591 status.go:176] multinode-887774-m02 status: &{Name:multinode-887774-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.89s)

TestMultiNode/serial/RestartMultiNode (47.16s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-887774 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0204 18:51:42.034670  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-887774 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (46.509893774s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-887774 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.16s)

TestMultiNode/serial/ValidateNameConflict (33.91s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-887774
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-887774-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-887774-m02 --driver=docker  --container-runtime=crio: exit status 14 (201.692424ms)
-- stdout --
	* [multinode-887774-m02] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20345
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20345-299426/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20345-299426/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-887774-m02' is duplicated with machine name 'multinode-887774-m02' in profile 'multinode-887774'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-887774-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-887774-m03 --driver=docker  --container-runtime=crio: (31.320775683s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-887774
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-887774: exit status 80 (328.343308ms)
-- stdout --
	* Adding node m03 to cluster multinode-887774 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-887774-m03 already exists in multinode-887774-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-887774-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-887774-m03: (2.001764807s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.91s)

TestPreload (131.42s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-249760 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0204 18:53:32.755249  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-249760 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m36.739421755s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-249760 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-249760 image pull gcr.io/k8s-minikube/busybox: (3.474856177s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-249760
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-249760: (5.745880393s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-249760 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-249760 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (22.795784438s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-249760 image list
helpers_test.go:175: Cleaning up "test-preload-249760" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-249760
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-249760: (2.378411926s)
--- PASS: TestPreload (131.42s)
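
The preload test pins an older Kubernetes version with preloaded images disabled, pulls an extra image, then stops and restarts the cluster to confirm the pulled image survives the restart. By hand, with an illustrative profile name:

	$ minikube start -p pre --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=crio
	$ minikube -p pre image pull gcr.io/k8s-minikube/busybox
	$ minikube stop -p pre
	$ minikube start -p pre --wait=true
	$ minikube -p pre image list       # busybox should still be listed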

TestScheduledStopUnix (107.76s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-139933 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-139933 --memory=2048 --driver=docker  --container-runtime=crio: (31.609225023s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-139933 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-139933 -n scheduled-stop-139933
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-139933 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0204 18:55:21.502000  304949 retry.go:31] will retry after 94.281µs: open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/scheduled-stop-139933/pid: no such file or directory
I0204 18:55:21.502161  304949 retry.go:31] will retry after 108.638µs: open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/scheduled-stop-139933/pid: no such file or directory
I0204 18:55:21.503693  304949 retry.go:31] will retry after 186.606µs: open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/scheduled-stop-139933/pid: no such file or directory
I0204 18:55:21.504868  304949 retry.go:31] will retry after 427.805µs: open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/scheduled-stop-139933/pid: no such file or directory
I0204 18:55:21.506111  304949 retry.go:31] will retry after 521.618µs: open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/scheduled-stop-139933/pid: no such file or directory
I0204 18:55:21.507223  304949 retry.go:31] will retry after 999.887µs: open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/scheduled-stop-139933/pid: no such file or directory
I0204 18:55:21.508391  304949 retry.go:31] will retry after 1.645814ms: open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/scheduled-stop-139933/pid: no such file or directory
I0204 18:55:21.510609  304949 retry.go:31] will retry after 1.387054ms: open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/scheduled-stop-139933/pid: no such file or directory
I0204 18:55:21.512866  304949 retry.go:31] will retry after 3.380669ms: open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/scheduled-stop-139933/pid: no such file or directory
I0204 18:55:21.518485  304949 retry.go:31] will retry after 2.124248ms: open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/scheduled-stop-139933/pid: no such file or directory
I0204 18:55:21.521112  304949 retry.go:31] will retry after 4.109762ms: open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/scheduled-stop-139933/pid: no such file or directory
I0204 18:55:21.526377  304949 retry.go:31] will retry after 5.123507ms: open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/scheduled-stop-139933/pid: no such file or directory
I0204 18:55:21.531669  304949 retry.go:31] will retry after 15.772942ms: open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/scheduled-stop-139933/pid: no such file or directory
I0204 18:55:21.548062  304949 retry.go:31] will retry after 28.294859ms: open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/scheduled-stop-139933/pid: no such file or directory
I0204 18:55:21.577490  304949 retry.go:31] will retry after 26.767591ms: open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/scheduled-stop-139933/pid: no such file or directory
I0204 18:55:21.605049  304949 retry.go:31] will retry after 64.483956ms: open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/scheduled-stop-139933/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-139933 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-139933 -n scheduled-stop-139933
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-139933
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-139933 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-139933
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-139933: exit status 7 (77.568151ms)
-- stdout --
	scheduled-stop-139933
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-139933 -n scheduled-stop-139933
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-139933 -n scheduled-stop-139933: exit status 7 (69.206998ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-139933" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-139933
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-139933: (4.492916063s)
--- PASS: TestScheduledStopUnix (107.76s)
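
Scheduled stop is asynchronous: --schedule hands the stop to a background process (the pid-file polling in the log above is waiting on it), --cancel-scheduled aborts it, and the TimeToStop field of "status" exposes the pending timer. A sketch with an illustrative profile name:

	$ minikube stop -p demo --schedule 5m                # returns immediately; the stop is pending
	$ minikube status -p demo --format={{.TimeToStop}}   # shows the remaining time
	$ minikube stop -p demo --cancel-scheduled           # abort the pending stop
	$ minikube stop -p demo --schedule 15s && sleep 20
	$ minikube status -p demo                            # exit status 7: host Stopped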

TestInsufficientStorage (10.73s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-807956 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
E0204 18:56:42.034638  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-807956 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.222185429s)
-- stdout --
	{"specversion":"1.0","id":"44ec26eb-8444-4505-a481-f0153a4d8f72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-807956] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"72b9fa67-5ced-4a0b-967d-b926e3ba0ad4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20345"}}
	{"specversion":"1.0","id":"fea491a0-f4c7-4365-94f0-08e030bd7108","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b43e2c2f-657a-4be6-ae34-e2896dc81f14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20345-299426/kubeconfig"}}
	{"specversion":"1.0","id":"08128f6d-f30a-464b-8f4e-76f95d5ed78b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20345-299426/.minikube"}}
	{"specversion":"1.0","id":"d3bde4e9-9bb6-4eb6-a81b-f2f9a1cb0728","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"c0c529d8-192f-4bc8-9d72-1721e68ce61a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"483e9776-4fb5-4ae1-ab33-1d2e274a0bb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"145898da-0d3c-476a-b789-b14cc6134cbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"3a5f8d12-f5a1-4e30-88c5-a5181ff43e33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"14a4d781-70f9-4a39-a3d3-888eacb9638d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"83952de0-84af-4f73-a689-a7cf1d851d37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-807956\" primary control-plane node in \"insufficient-storage-807956\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6d61fc41-166b-416a-8e72-aa7a6210f5ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"4b5b94c4-8faf-45c2-bf03-d9790b2bab7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"30853974-5b1b-412e-b983-f8721f59a2dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-807956 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-807956 --output=json --layout=cluster: exit status 7 (285.951839ms)
-- stdout --
	{"Name":"insufficient-storage-807956","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-807956","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0204 18:56:45.600759  443440 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-807956" does not appear in /home/jenkins/minikube-integration/20345-299426/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-807956 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-807956 --output=json --layout=cluster: exit status 7 (279.505698ms)
-- stdout --
	{"Name":"insufficient-storage-807956","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-807956","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0204 18:56:45.880926  443503 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-807956" does not appear in /home/jenkins/minikube-integration/20345-299426/kubeconfig
	E0204 18:56:45.891310  443503 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/insufficient-storage-807956/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-807956" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-807956
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-807956: (1.939001737s)
--- PASS: TestInsufficientStorage (10.73s)
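Note on the failure mode exercised above: minikube's storage preflight aborts with RSRC_DOCKER_STORAGE (exit code 26) when /var on the Docker host is full. Outside this test's scenario, the remediation from the error's advice field translates to roughly the following; the profile name is a placeholder, the df check is an assumption for verifying the result, and --force only skips the check rather than freeing space:

$ docker system prune -a
$ minikube ssh -p <profile> -- docker system prune
$ df -h /var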

TestRunningBinaryUpgrade (74.64s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3998957740 start -p running-upgrade-148522 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0204 19:01:42.034248  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3998957740 start -p running-upgrade-148522 --memory=2200 --vm-driver=docker  --container-runtime=crio: (36.584425499s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-148522 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-148522 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.149625628s)
helpers_test.go:175: Cleaning up "running-upgrade-148522" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-148522
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-148522: (2.848594334s)
--- PASS: TestRunningBinaryUpgrade (74.64s)
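For reference, the running-upgrade flow above is just two starts against the same profile, first with an older release binary and then with the binary under test; a hand-run sketch with placeholder binary paths and profile name:

$ /path/to/minikube-v1.26.0 start -p running-upgrade --memory=2200 --vm-driver=docker --container-runtime=crio
$ /path/to/new/minikube start -p running-upgrade --memory=2200 --driver=docker --container-runtime=crio
$ /path/to/new/minikube delete -p running-upgrade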

TestKubernetesUpgrade (398.19s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-163052 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-163052 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m9.995178877s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-163052
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-163052: (1.905432549s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-163052 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-163052 status --format={{.Host}}: exit status 7 (109.014452ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-163052 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-163052 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m44.22617715s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-163052 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-163052 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-163052 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (135.322521ms)

-- stdout --
	* [kubernetes-upgrade-163052] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20345
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20345-299426/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20345-299426/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-163052
	    minikube start -p kubernetes-upgrade-163052 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1630522 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-163052 --kubernetes-version=v1.32.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-163052 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-163052 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.296201633s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-163052" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-163052
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-163052: (2.392383308s)
--- PASS: TestKubernetesUpgrade (398.19s)
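The sequence above reduces to four CLI steps, with the downgrade attempt expected to fail fast with K8S_DOWNGRADE_UNSUPPORTED (exit status 106); versions mirror the log, the profile name is illustrative:

$ minikube start -p k8s-upgrade --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
$ minikube stop -p k8s-upgrade
$ minikube start -p k8s-upgrade --memory=2200 --kubernetes-version=v1.32.1 --driver=docker --container-runtime=crio
$ minikube start -p k8s-upgrade --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio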

TestMissingContainerUpgrade (169.43s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1210522231 start -p missing-upgrade-368513 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1210522231 start -p missing-upgrade-368513 --memory=2200 --driver=docker  --container-runtime=crio: (1m29.87845673s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-368513
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-368513: (10.488171953s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-368513
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-368513 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0204 18:58:32.754978  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-368513 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m5.296304665s)
helpers_test.go:175: Cleaning up "missing-upgrade-368513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-368513
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-368513: (2.358801069s)
--- PASS: TestMissingContainerUpgrade (169.43s)
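This scenario simulates the node container vanishing behind minikube's back: the container is stopped and removed with the plain Docker CLI, after which a start with the new binary must recreate it. Equivalent commands, with the container/profile name as a placeholder:

$ docker stop <profile> && docker rm <profile>
$ minikube start -p <profile> --memory=2200 --driver=docker --container-runtime=crio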

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-761711 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-761711 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (101.892364ms)

-- stdout --
	* [NoKubernetes-761711] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20345
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20345-299426/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20345-299426/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
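The exit-status-14 rejection above is the intended validation path: --no-kubernetes and --kubernetes-version are mutually exclusive. If a version is pinned in the global config, the error's own suggestion applies before retrying (profile name as a placeholder):

$ minikube config unset kubernetes-version
$ minikube start -p <profile> --no-kubernetes --driver=docker --container-runtime=crio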

TestNoKubernetes/serial/StartWithK8s (38.52s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-761711 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-761711 --driver=docker  --container-runtime=crio: (38.150526357s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-761711 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.52s)

TestNoKubernetes/serial/StartWithStopK8s (13.02s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-761711 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-761711 --no-kubernetes --driver=docker  --container-runtime=crio: (10.694382461s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-761711 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-761711 status -o json: exit status 2 (293.921203ms)

-- stdout --
	{"Name":"NoKubernetes-761711","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-761711
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-761711: (2.027489614s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (13.02s)
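The JSON above is the interesting bit: with Kubernetes disabled, Host stays "Running" while Kubelet and APIServer read "Stopped", and minikube status signals this with exit code 2. A quick field check along those lines, assuming jq is installed (it is not part of the harness):

$ minikube -p <profile> status -o json | jq -r .Kubelet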

TestNoKubernetes/serial/Start (9.18s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-761711 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-761711 --no-kubernetes --driver=docker  --container-runtime=crio: (9.176018143s)
--- PASS: TestNoKubernetes/serial/Start (9.18s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-761711 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-761711 "sudo systemctl is-active --quiet service kubelet": exit status 1 (261.843981ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
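The ssh exit status 3 above follows the systemd convention: systemctl is-active exits non-zero (typically 3 for an inactive or missing unit), which is exactly what this check wants when no kubelet is running. Reproduced directly (profile name as a placeholder):

$ minikube ssh -p <profile> "sudo systemctl is-active --quiet service kubelet"; echo $?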

TestNoKubernetes/serial/ProfileList (1.09s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.09s)

TestNoKubernetes/serial/Stop (1.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-761711
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-761711: (1.273838556s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestNoKubernetes/serial/StartNoArgs (8.25s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-761711 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-761711 --driver=docker  --container-runtime=crio: (8.246802939s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.25s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-761711 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-761711 "sudo systemctl is-active --quiet service kubelet": exit status 1 (371.843636ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

TestStoppedBinaryUpgrade/Setup (1.45s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.45s)

TestStoppedBinaryUpgrade/Upgrade (85.4s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3323930156 start -p stopped-upgrade-536273 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0204 18:59:45.103032  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3323930156 start -p stopped-upgrade-536273 --memory=2200 --vm-driver=docker  --container-runtime=crio: (35.666681568s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3323930156 -p stopped-upgrade-536273 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3323930156 -p stopped-upgrade-536273 stop: (2.579741855s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-536273 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-536273 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (47.157974888s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (85.40s)
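Same shape as TestRunningBinaryUpgrade, but with an explicit stop in between, so the new binary has to bring a cold cluster back up rather than adopt a running one; a hand-run sketch with placeholder binary paths and profile name:

$ /path/to/minikube-v1.26.0 start -p stopped-upgrade --memory=2200 --vm-driver=docker --container-runtime=crio
$ /path/to/minikube-v1.26.0 -p stopped-upgrade stop
$ /path/to/new/minikube start -p stopped-upgrade --memory=2200 --driver=docker --container-runtime=crio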

TestStoppedBinaryUpgrade/MinikubeLogs (1.51s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-536273
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-536273: (1.503531462s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.51s)

TestPause/serial/Start (53.41s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-907238 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-907238 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (53.410787622s)
--- PASS: TestPause/serial/Start (53.41s)

TestPause/serial/SecondStartNoReconfiguration (29.06s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-907238 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0204 19:03:32.754673  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-907238 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.035979587s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.06s)

TestPause/serial/Pause (1.04s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-907238 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-907238 --alsologtostderr -v=5: (1.037931544s)
--- PASS: TestPause/serial/Pause (1.04s)

TestPause/serial/VerifyStatus (0.4s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-907238 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-907238 --output=json --layout=cluster: exit status 2 (403.72964ms)

-- stdout --
	{"Name":"pause-907238","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-907238","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.40s)
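Note the status semantics in the JSON above: a paused profile reports StatusCode 418 ("Paused") for the cluster and apiserver, the kubelet shows 405 ("Stopped"), and the command exits 2 rather than 0. To pull just the headline state (jq assumed available, not part of the harness):

$ minikube status -p <profile> --output=json --layout=cluster | jq -r .StatusName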

TestPause/serial/Unpause (0.81s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-907238 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.81s)

TestPause/serial/PauseAgain (1.48s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-907238 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-907238 --alsologtostderr -v=5: (1.482738501s)
--- PASS: TestPause/serial/PauseAgain (1.48s)

TestPause/serial/DeletePaused (3.4s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-907238 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-907238 --alsologtostderr -v=5: (3.400536162s)
--- PASS: TestPause/serial/DeletePaused (3.40s)

TestPause/serial/VerifyDeletedResources (2.73s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (2.659925431s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-907238
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-907238: exit status 1 (21.291144ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-907238: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (2.73s)
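The teardown check leans on the Docker CLI failing loudly for absent objects: docker volume inspect exits 1 with "no such volume" once delete has removed the profile's volume. The same spot checks by hand (profile name as a placeholder):

$ docker ps -a | grep <profile>
$ docker volume inspect <profile>
$ docker network ls | grep <profile>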

TestNetworkPlugins/group/false (5.63s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-413897 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-413897 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (263.627493ms)

-- stdout --
	* [false-413897] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20345
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20345-299426/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20345-299426/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0204 19:04:40.027288  484775 out.go:345] Setting OutFile to fd 1 ...
	I0204 19:04:40.027515  484775 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0204 19:04:40.027527  484775 out.go:358] Setting ErrFile to fd 2...
	I0204 19:04:40.027533  484775 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0204 19:04:40.027827  484775 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20345-299426/.minikube/bin
	I0204 19:04:40.028367  484775 out.go:352] Setting JSON to false
	I0204 19:04:40.029412  484775 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10029,"bootTime":1738685851,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0204 19:04:40.029522  484775 start.go:139] virtualization:  
	I0204 19:04:40.034545  484775 out.go:177] * [false-413897] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0204 19:04:40.038172  484775 out.go:177]   - MINIKUBE_LOCATION=20345
	I0204 19:04:40.038393  484775 notify.go:220] Checking for updates...
	I0204 19:04:40.045222  484775 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0204 19:04:40.049206  484775 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20345-299426/kubeconfig
	I0204 19:04:40.052004  484775 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20345-299426/.minikube
	I0204 19:04:40.055198  484775 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0204 19:04:40.058401  484775 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0204 19:04:40.061701  484775 driver.go:394] Setting default libvirt URI to qemu:///system
	I0204 19:04:40.098302  484775 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0204 19:04:40.098489  484775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0204 19:04:40.188739  484775 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:43 SystemTime:2025-02-04 19:04:40.177371729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0204 19:04:40.188860  484775 docker.go:318] overlay module found
	I0204 19:04:40.192020  484775 out.go:177] * Using the docker driver based on user configuration
	I0204 19:04:40.195072  484775 start.go:297] selected driver: docker
	I0204 19:04:40.195101  484775 start.go:901] validating driver "docker" against <nil>
	I0204 19:04:40.195116  484775 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0204 19:04:40.198176  484775 out.go:201] 
	W0204 19:04:40.201399  484775 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0204 19:04:40.204405  484775 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-413897 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-413897

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-413897

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-413897

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-413897

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-413897

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-413897

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-413897

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-413897

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-413897

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-413897

>>> host: /etc/nsswitch.conf:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: /etc/hosts:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: /etc/resolv.conf:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-413897

>>> host: crictl pods:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: crictl containers:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> k8s: describe netcat deployment:
error: context "false-413897" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-413897" does not exist

>>> k8s: netcat logs:
error: context "false-413897" does not exist

>>> k8s: describe coredns deployment:
error: context "false-413897" does not exist

>>> k8s: describe coredns pods:
error: context "false-413897" does not exist

>>> k8s: coredns logs:
error: context "false-413897" does not exist

>>> k8s: describe api server pod(s):
error: context "false-413897" does not exist

>>> k8s: api server logs:
error: context "false-413897" does not exist

>>> host: /etc/cni:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: ip a s:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: ip r s:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: iptables-save:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: iptables table nat:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> k8s: describe kube-proxy daemon set:
error: context "false-413897" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-413897" does not exist

>>> k8s: kube-proxy logs:
error: context "false-413897" does not exist

>>> host: kubelet daemon status:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: kubelet daemon config:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> k8s: kubelet logs:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-413897

>>> host: docker daemon status:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: docker daemon config:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: /etc/docker/daemon.json:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: docker system info:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: cri-docker daemon status:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: cri-docker daemon config:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: cri-dockerd version:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: containerd daemon status:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: containerd daemon config:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: /etc/containerd/config.toml:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: containerd config dump:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: crio daemon status:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: crio daemon config:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: /etc/crio:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

>>> host: crio config:
* Profile "false-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-413897"

----------------------- debugLogs end: false-413897 [took: 5.113316108s] --------------------------------
helpers_test.go:175: Cleaning up "false-413897" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-413897
--- PASS: TestNetworkPlugins/group/false (5.63s)
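This test passes by failing: the crio container runtime requires a CNI plugin, so minikube rejects --cni=false at validation time (MK_USAGE, exit status 14) before any cluster is created, which is also why every debugLogs probe above reports a missing profile or context. The essential invocation, with a placeholder profile name:

$ minikube start -p <profile> --cni=false --driver=docker --container-runtime=crio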

TestStartStop/group/old-k8s-version/serial/FirstStart (174.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-839666 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0204 19:06:35.828315  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:06:42.034078  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-839666 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m54.936137834s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (174.94s)

TestStartStop/group/no-preload/serial/FirstStart (67.95s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-414061 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-414061 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (1m7.951082308s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (67.95s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-839666 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [99af62ad-ee19-49f0-85ad-ad4e960c9ceb] Pending
helpers_test.go:344: "busybox" [99af62ad-ee19-49f0-85ad-ad4e960c9ceb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [99af62ad-ee19-49f0-85ad-ad4e960c9ceb] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.003969186s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-839666 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.97s)
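
Note on the DeployApp flow: the harness creates the busybox pod, polls until it is Running, then probes the container. A by-hand sketch using the commands above; the kubectl wait line is a hypothetical stand-in for the harness's 8m poll on the "integration-test=busybox" label:

    kubectl --context old-k8s-version-839666 create -f testdata/busybox.yaml
    # hypothetical equivalent of the harness's readiness poll
    kubectl --context old-k8s-version-839666 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    # the test's actual probe: file-descriptor limit inside the container
    kubectl --context old-k8s-version-839666 exec busybox -- /bin/sh -c "ulimit -n"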

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-839666 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-839666 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.402973346s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-839666 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.58s)
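
The --images/--registries overrides above point the metrics-server addon at a stand-in image so the enable path can be exercised without a real metrics-server registry. A minimal reproduction, using only flags that appear in this run:

    out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-839666 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    # the describe output should show the overridden image sourced from fake.domain
    kubectl --context old-k8s-version-839666 describe deploy/metrics-server -n kube-system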

TestStartStop/group/old-k8s-version/serial/Stop (13.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-839666 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-839666 --alsologtostderr -v=3: (13.588417277s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.59s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-839666 -n old-k8s-version-839666
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-839666 -n old-k8s-version-839666: exit status 7 (97.768399ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-839666 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/old-k8s-version/serial/SecondStart (153.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-839666 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-839666 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m32.775430092s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-839666 -n old-k8s-version-839666
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (153.13s)
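
Stop through SecondStart above form one stop/reconfigure/restart cycle. A condensed sketch of the same cycle (start flags abridged; the full set is in the FirstStart log above):

    out/minikube-linux-arm64 stop -p old-k8s-version-839666 --alsologtostderr -v=3
    # while the host is stopped, status exits 7; the harness records "may be ok"
    out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-839666 -n old-k8s-version-839666
    out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-839666 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
    out/minikube-linux-arm64 start -p old-k8s-version-839666 --memory=2200 --wait=true --driver=docker --container-runtime=crio --kubernetes-version=v1.20.0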

TestStartStop/group/no-preload/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-414061 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3c201e2b-7c0e-4289-878f-23bba865e160] Pending
helpers_test.go:344: "busybox" [3c201e2b-7c0e-4289-878f-23bba865e160] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3c201e2b-7c0e-4289-878f-23bba865e160] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005318471s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-414061 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.40s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.64s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-414061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-414061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.482104205s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-414061 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.64s)

TestStartStop/group/no-preload/serial/Stop (12.4s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-414061 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-414061 --alsologtostderr -v=3: (12.395468878s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.40s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-414061 -n no-preload-414061
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-414061 -n no-preload-414061: exit status 7 (80.498185ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-414061 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (266.74s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-414061 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0204 19:11:42.034011  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-414061 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (4m26.370844733s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-414061 -n no-preload-414061
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (266.74s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-5f2t8" [26bacc14-ad96-424b-90ca-d019716042fc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004607034s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-5f2t8" [26bacc14-ad96-424b-90ca-d019716042fc] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004310017s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-839666 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-839666 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)
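
VerifyKubernetesImages audits the profile's image store via the JSON listing. A sketch of doing the same by hand; the jq path assumes each JSON entry carries a repoTags array, which may differ across minikube versions:

    out/minikube-linux-arm64 -p old-k8s-version-839666 image list --format=json | jq -r '.[].repoTags[]'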

TestStartStop/group/old-k8s-version/serial/Pause (3.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-839666 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-839666 -n old-k8s-version-839666
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-839666 -n old-k8s-version-839666: exit status 2 (354.9257ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-839666 -n old-k8s-version-839666
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-839666 -n old-k8s-version-839666: exit status 2 (335.888528ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-839666 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-839666 -n old-k8s-version-839666
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-839666 -n old-k8s-version-839666
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.06s)
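
The Pause step is one pause/inspect/unpause round trip; exit status 2 from both status probes is the expected signature while components are paused (APIServer reports "Paused", Kubelet reports "Stopped"). The same cycle by hand, commands taken from the log:

    out/minikube-linux-arm64 pause -p old-k8s-version-839666 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-839666 -n old-k8s-version-839666
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-839666 -n old-k8s-version-839666
    out/minikube-linux-arm64 unpause -p old-k8s-version-839666 --alsologtostderr -v=1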

TestStartStop/group/embed-certs/serial/FirstStart (48.42s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-640359 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-640359 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (48.417993063s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (48.42s)

TestStartStop/group/embed-certs/serial/DeployApp (11.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-640359 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [56fa8dd4-5940-4f9f-a0dc-c8b0e29614c2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [56fa8dd4-5940-4f9f-a0dc-c8b0e29614c2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.00344116s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-640359 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.35s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-640359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-640359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.019900439s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-640359 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/embed-certs/serial/Stop (11.95s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-640359 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-640359 --alsologtostderr -v=3: (11.946225975s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.95s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-640359 -n embed-certs-640359
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-640359 -n embed-certs-640359: exit status 7 (68.95275ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-640359 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (272.5s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-640359 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0204 19:13:32.755310  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:13:54.568857  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/old-k8s-version-839666/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:13:54.575656  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/old-k8s-version-839666/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:13:54.587410  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/old-k8s-version-839666/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:13:54.608816  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/old-k8s-version-839666/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:13:54.650177  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/old-k8s-version-839666/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:13:54.731646  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/old-k8s-version-839666/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:13:54.893664  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/old-k8s-version-839666/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:13:55.215504  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/old-k8s-version-839666/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:13:55.857530  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/old-k8s-version-839666/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:13:57.139676  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/old-k8s-version-839666/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:13:59.701944  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/old-k8s-version-839666/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:14:04.823534  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/old-k8s-version-839666/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:14:15.064936  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/old-k8s-version-839666/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:14:35.546630  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/old-k8s-version-839666/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-640359 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (4m32.126230609s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-640359 -n embed-certs-640359
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (272.50s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-m8ld8" [44848b92-d120-4a09-bf99-1fbe863697ea] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004010411s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-m8ld8" [44848b92-d120-4a09-bf99-1fbe863697ea] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007106238s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-414061 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-414061 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/no-preload/serial/Pause (3.34s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-414061 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-414061 -n no-preload-414061
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-414061 -n no-preload-414061: exit status 2 (335.313418ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-414061 -n no-preload-414061
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-414061 -n no-preload-414061: exit status 2 (327.726987ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-414061 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-414061 -n no-preload-414061
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-414061 -n no-preload-414061
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.34s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-714718 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0204 19:15:16.508212  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/old-k8s-version-839666/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-714718 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (51.895458998s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.90s)
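
This group moves the API server off the default port via --apiserver-port=8444. A quick way to confirm the port took effect; expecting 8444 in the endpoint is an inference, not output captured in this log:

    # the kubeconfig endpoint for this profile should reference port 8444
    kubectl --context default-k8s-diff-port-714718 cluster-info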

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-714718 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [176dff0e-49c7-4a3f-8c27-d1821fe5c54c] Pending
helpers_test.go:344: "busybox" [176dff0e-49c7-4a3f-8c27-d1821fe5c54c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.003081896s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-714718 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-714718 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-714718 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.07113863s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-714718 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-714718 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-714718 --alsologtostderr -v=3: (11.983567223s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.98s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-714718 -n default-k8s-diff-port-714718
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-714718 -n default-k8s-diff-port-714718: exit status 7 (81.790083ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-714718 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (277.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-714718 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0204 19:16:25.104819  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:16:38.430400  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/old-k8s-version-839666/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:16:42.034110  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-714718 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (4m36.770660044s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-714718 -n default-k8s-diff-port-714718
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (277.18s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-n4rtr" [8944e8a7-e94d-43d7-baac-1501d70044ef] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003685717s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-n4rtr" [8944e8a7-e94d-43d7-baac-1501d70044ef] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005119144s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-640359 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-640359 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/embed-certs/serial/Pause (3.21s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-640359 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-640359 -n embed-certs-640359
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-640359 -n embed-certs-640359: exit status 2 (340.455855ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-640359 -n embed-certs-640359
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-640359 -n embed-certs-640359: exit status 2 (328.604887ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-640359 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-640359 -n embed-certs-640359
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-640359 -n embed-certs-640359
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.21s)

TestStartStop/group/newest-cni/serial/FirstStart (35.11s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-635634 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0204 19:18:32.755445  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-635634 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (35.11151486s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.11s)
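
The newest-cni start differs from the other groups mainly in its wait set and CNI wiring; the comments below are interpretation, the flags are verbatim from the log:

    # --wait=apiserver,system_pods,default_sa: only wait for control-plane readiness
    # --network-plugin=cni: defer pod networking to an external CNI
    # --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16: hand the pod CIDR through to kubeadm
    out/minikube-linux-arm64 start -p newest-cni-635634 --memory=2200 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.32.1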

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.45s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-635634 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-635634 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.450422596s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.45s)

TestStartStop/group/newest-cni/serial/Stop (1.35s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-635634 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-635634 --alsologtostderr -v=3: (1.354393424s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.35s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-635634 -n newest-cni-635634
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-635634 -n newest-cni-635634: exit status 7 (76.386715ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-635634 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (16.45s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-635634 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0204 19:18:54.567878  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/old-k8s-version-839666/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-635634 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (16.101662957s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-635634 -n newest-cni-635634
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.45s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-635634 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/Pause (3.19s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-635634 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-635634 -n newest-cni-635634
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-635634 -n newest-cni-635634: exit status 2 (350.499455ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-635634 -n newest-cni-635634
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-635634 -n newest-cni-635634: exit status 2 (347.504642ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-635634 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-635634 -n newest-cni-635634
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-635634 -n newest-cni-635634
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.19s)

TestNetworkPlugins/group/auto/Start (54.97s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-413897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0204 19:19:22.272940  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/old-k8s-version-839666/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:19:59.489746  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/no-preload-414061/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:19:59.496260  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/no-preload-414061/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:19:59.507658  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/no-preload-414061/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:19:59.529146  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/no-preload-414061/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:19:59.570532  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/no-preload-414061/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:19:59.652005  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/no-preload-414061/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:19:59.813535  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/no-preload-414061/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:20:00.167438  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/no-preload-414061/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:20:00.809752  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/no-preload-414061/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:20:02.091821  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/no-preload-414061/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:20:04.653614  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/no-preload-414061/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:20:09.775385  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/no-preload-414061/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-413897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (54.965308571s)
--- PASS: TestNetworkPlugins/group/auto/Start (54.97s)

TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-413897 "pgrep -a kubelet"
I0204 19:20:12.001422  304949 config.go:182] Loaded profile config "auto-413897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

TestNetworkPlugins/group/auto/NetCatPod (12.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-413897 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-7p2pf" [293db962-e192-4bcd-8a7e-6883e27058c7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-7p2pf" [293db962-e192-4bcd-8a7e-6883e27058c7] Running
E0204 19:20:20.017507  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/no-preload-414061/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004078301s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.30s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-413897 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-413897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-413897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
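
Taken together, DNS, Localhost and HairPin probe progressively longer paths from inside the netcat pod: DNS resolves a cluster service name, Localhost connects to the pod's own listener, and HairPin dials the pod's own Service name, so the connection leaves for the Service VIP and must be NATed back to the same pod (hairpin mode). In the nc invocations, -z only checks that the port accepts a connection, -w 5 caps the connect timeout at five seconds, and -i 5 spaces out the probes. Condensed, the three probes are (a sketch, assuming the test manifest exposes the Deployment through a Service named netcat on port 8080):

    kubectl --context auto-413897 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-413897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-413897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"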

TestNetworkPlugins/group/kindnet/Start (55.85s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-413897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-413897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (55.850659515s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (55.85s)
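
Each plugin's Start case issues the same minikube start command and varies only the --cni flag (the auto group omits it, and custom-flannel points --cni at a local manifest instead of a built-in name). Parameterized, the matrix looks roughly like this (a sketch):

    for cni in kindnet calico flannel bridge; do
      out/minikube-linux-arm64 start -p "${cni}-413897" --memory=3072 --alsologtostderr \
        --wait=true --wait-timeout=15m --cni="$cni" --driver=docker --container-runtime=crio
    done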

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-ntpjm" [281c1798-b824-4e97-973b-4e7f41c483e5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004572334s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
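
UserAppExistsAfterStop confirms that a user-deployed workload (here the dashboard) comes back up after the stop/start cycle earlier in this serial group. The same readiness wait, written directly against kubectl (a sketch):

    kubectl --context default-k8s-diff-port-714718 -n kubernetes-dashboard \
      wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m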

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-ntpjm" [281c1798-b824-4e97-973b-4e7f41c483e5] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003130995s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-714718 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.15s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-714718 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)
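
VerifyKubernetesImages dumps the images present in the node and reports anything outside the expected Kubernetes set; the kindest/kindnetd and busybox lines above are informational, not failures. The same list can be eyeballed manually (a sketch; the repoTags field name is an assumption about the JSON that minikube image list emits):

    out/minikube-linux-arm64 -p default-k8s-diff-port-714718 image list --format=json | jq -r '.[].repoTags[]'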

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-714718 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-714718 --alsologtostderr -v=1: (1.320171841s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-714718 -n default-k8s-diff-port-714718
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-714718 -n default-k8s-diff-port-714718: exit status 2 (510.482187ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-714718 -n default-k8s-diff-port-714718
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-714718 -n default-k8s-diff-port-714718: exit status 2 (451.541987ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-714718 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-714718 --alsologtostderr -v=1: (1.07574967s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-714718 -n default-k8s-diff-port-714718
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-714718 -n default-k8s-diff-port-714718
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.52s)
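
The Pause sequence is: pause the profile, confirm the API server reports Paused and the kubelet reports Stopped (minikube status exits with status 2 while components are not running, which the test explicitly treats as "may be ok"), then unpause and re-check both components. By hand (a sketch):

    out/minikube-linux-arm64 pause -p default-k8s-diff-port-714718
    out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-714718   # prints Paused, exit status 2
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-714718     # prints Stopped, exit status 2
    out/minikube-linux-arm64 unpause -p default-k8s-diff-port-714718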
E0204 19:25:59.664105  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/default-k8s-diff-port-714718/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:25:59.670728  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/default-k8s-diff-port-714718/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:25:59.682609  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/default-k8s-diff-port-714718/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:25:59.704013  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/default-k8s-diff-port-714718/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:25:59.745470  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/default-k8s-diff-port-714718/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:25:59.827162  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/default-k8s-diff-port-714718/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:25:59.988513  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/default-k8s-diff-port-714718/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:26:00.311911  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/default-k8s-diff-port-714718/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:26:00.953675  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/default-k8s-diff-port-714718/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:26:02.235177  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/default-k8s-diff-port-714718/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:26:04.797417  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/default-k8s-diff-port-714718/client.crt: no such file or directory" logger="UnhandledError"
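
The cert_rotation bursts interleaved between results here and elsewhere in the report appear to come from the shared test client: it is still watching client certificates of profiles (default-k8s-diff-port-714718, auto-413897, no-preload-414061) that later steps have already deleted, so the open() fails. They are noise rather than test failures; when reading a saved copy of a report like this, they can be filtered out first (a sketch; report.txt stands in for a local copy of this log):

    grep -v cert_rotation.go report.txt > report-quiet.txt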

TestNetworkPlugins/group/calico/Start (74.02s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-413897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0204 19:21:42.036400  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/addons-405803/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-413897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m14.022289936s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.02s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-fplv9" [6754e76c-b4e3-48bf-80e8-d65bb93b0e36] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004751005s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
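
ControllerPod only exists for plugins that deploy a per-node agent (kindnet, calico, flannel); it waits for that DaemonSet pod to be Running and healthy before the connectivity subtests run. The equivalent wait with kubectl (a sketch):

    kubectl --context kindnet-413897 -n kube-system \
      wait --for=condition=ready pod -l app=kindnet --timeout=10m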

TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-413897 "pgrep -a kubelet"
I0204 19:21:48.455038  304949 config.go:182] Loaded profile config "kindnet-413897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.41s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-413897 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-vzjg5" [c6b2bbeb-79d6-4ad0-a9f4-0129e928880e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-vzjg5" [c6b2bbeb-79d6-4ad0-a9f4-0129e928880e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003997385s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.41s)

TestNetworkPlugins/group/kindnet/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-413897 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.28s)

TestNetworkPlugins/group/kindnet/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-413897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.24s)

TestNetworkPlugins/group/kindnet/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-413897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.24s)

TestNetworkPlugins/group/custom-flannel/Start (64.8s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-413897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-413897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m4.798011487s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.80s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-lgr88" [aeb9d617-903b-4a2a-ae76-10581b5193ca] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006788523s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-413897 "pgrep -a kubelet"
I0204 19:22:42.338564  304949 config.go:182] Loaded profile config "calico-413897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

TestNetworkPlugins/group/calico/NetCatPod (13.47s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-413897 replace --force -f testdata/netcat-deployment.yaml
I0204 19:22:42.793107  304949 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-vmlrl" [182ecd67-f5ab-45aa-9100-c881735410a0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0204 19:22:43.382433  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/no-preload-414061/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-vmlrl" [182ecd67-f5ab-45aa-9100-c881735410a0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.003658689s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.47s)

TestNetworkPlugins/group/calico/DNS (0.36s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-413897 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.36s)

TestNetworkPlugins/group/calico/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-413897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-413897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.27s)

TestNetworkPlugins/group/enable-default-cni/Start (45.56s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-413897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-413897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (45.56272058s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (45.56s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-413897 "pgrep -a kubelet"
I0204 19:23:31.939281  304949 config.go:182] Loaded profile config "custom-flannel-413897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-413897 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-62wst" [02df0f17-2b06-47a0-abf1-c6346df05d7e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0204 19:23:32.755268  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/functional-289833/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-62wst" [02df0f17-2b06-47a0-abf1-c6346df05d7e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004461853s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.36s)

TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-413897 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-413897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-413897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-413897 "pgrep -a kubelet"
I0204 19:24:07.488762  304949 config.go:182] Loaded profile config "enable-default-cni-413897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-413897 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-jn562" [1416ad29-c5ce-465e-895d-044694045725] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-jn562" [1416ad29-c5ce-465e-895d-044694045725] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.004445495s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.31s)

TestNetworkPlugins/group/flannel/Start (64.6s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-413897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-413897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m4.597611206s)
--- PASS: TestNetworkPlugins/group/flannel/Start (64.60s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-413897 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-413897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-413897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestNetworkPlugins/group/bridge/Start (50.29s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-413897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0204 19:24:59.490047  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/no-preload-414061/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:25:12.271624  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/auto-413897/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:25:12.278131  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/auto-413897/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:25:12.289498  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/auto-413897/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:25:12.311248  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/auto-413897/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:25:12.352604  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/auto-413897/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:25:12.433957  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/auto-413897/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:25:12.596051  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/auto-413897/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:25:12.917664  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/auto-413897/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-413897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (50.289777818s)
--- PASS: TestNetworkPlugins/group/bridge/Start (50.29s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-qbbm4" [06587de2-2ca8-417c-9479-758802ea9a47] Running
E0204 19:25:13.559216  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/auto-413897/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:25:14.841206  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/auto-413897/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:25:17.402601  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/auto-413897/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004818753s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-413897 "pgrep -a kubelet"
I0204 19:25:19.847016  304949 config.go:182] Loaded profile config "flannel-413897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

TestNetworkPlugins/group/flannel/NetCatPod (13.38s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-413897 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-92tw8" [2bf0caf0-c004-4349-8e74-664c84a29047] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0204 19:25:22.524526  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/auto-413897/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-92tw8" [2bf0caf0-c004-4349-8e74-664c84a29047] Running
E0204 19:25:27.224401  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/no-preload-414061/client.crt: no such file or directory" logger="UnhandledError"
E0204 19:25:32.767126  304949 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/auto-413897/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.003553548s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.38s)

TestNetworkPlugins/group/flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-413897 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-413897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-413897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-413897 "pgrep -a kubelet"
I0204 19:25:36.654947  304949 config.go:182] Loaded profile config "bridge-413897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (12.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-413897 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-sqwj7" [13e57668-43d9-414c-b2c3-e96fe960979f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-sqwj7" [13e57668-43d9-414c-b2c3-e96fe960979f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004038911s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.27s)

TestNetworkPlugins/group/bridge/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-413897 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.25s)

TestNetworkPlugins/group/bridge/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-413897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.23s)

TestNetworkPlugins/group/bridge/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-413897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.27s)

Test skip (32/331)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

TestDownloadOnly/v1.32.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

TestDownloadOnly/v1.32.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

TestDownloadOnlyKic (0.59s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-153301 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-153301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-153301
--- SKIP: TestDownloadOnlyKic (0.59s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0.32s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-405803 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.32s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1804: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-636952" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-636952
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

TestNetworkPlugins/group/kubenet (5.86s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-413897 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-413897

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-413897

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-413897

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-413897

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-413897

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-413897

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-413897

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-413897

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-413897

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-413897

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: /etc/hosts:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: /etc/resolv.conf:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-413897

>>> host: crictl pods:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: crictl containers:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> k8s: describe netcat deployment:
error: context "kubenet-413897" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-413897" does not exist

>>> k8s: netcat logs:
error: context "kubenet-413897" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-413897" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-413897" does not exist

>>> k8s: coredns logs:
error: context "kubenet-413897" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-413897" does not exist

>>> k8s: api server logs:
error: context "kubenet-413897" does not exist

>>> host: /etc/cni:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: ip a s:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: ip r s:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: iptables-save:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: iptables table nat:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-413897" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-413897" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-413897" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: kubelet daemon config:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> k8s: kubelet logs:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20345-299426/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 04 Feb 2025 19:04:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-163052
contexts:
- context:
    cluster: kubernetes-upgrade-163052
    extensions:
    - extension:
        last-update: Tue, 04 Feb 2025 19:04:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-163052
  name: kubernetes-upgrade-163052
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-163052
  user:
    client-certificate: /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/kubernetes-upgrade-163052/client.crt
    client-key: /home/jenkins/minikube-integration/20345-299426/.minikube/profiles/kubernetes-upgrade-163052/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-413897

>>> host: docker daemon status:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: docker daemon config:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: docker system info:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: cri-docker daemon status:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: cri-docker daemon config:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: cri-dockerd version:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: containerd daemon status:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: containerd daemon config:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: containerd config dump:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: crio daemon status:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: crio daemon config:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: /etc/crio:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

>>> host: crio config:
* Profile "kubenet-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-413897"

----------------------- debugLogs end: kubenet-413897 [took: 5.094454788s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-413897" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-413897
--- SKIP: TestNetworkPlugins/group/kubenet (5.86s)

TestNetworkPlugins/group/cilium (5.82s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it interferes with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-413897 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-413897

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-413897

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-413897

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-413897

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-413897

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-413897

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-413897

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-413897

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-413897

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-413897

>>> host: /etc/nsswitch.conf:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: /etc/hosts:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: /etc/resolv.conf:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-413897

>>> host: crictl pods:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: crictl containers:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> k8s: describe netcat deployment:
error: context "cilium-413897" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-413897" does not exist

>>> k8s: netcat logs:
error: context "cilium-413897" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-413897" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-413897" does not exist

>>> k8s: coredns logs:
error: context "cilium-413897" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-413897" does not exist

>>> k8s: api server logs:
error: context "cilium-413897" does not exist

>>> host: /etc/cni:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: ip a s:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: ip r s:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: iptables-save:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: iptables table nat:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-413897

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-413897

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-413897" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-413897" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-413897

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-413897

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-413897" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-413897" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-413897" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-413897" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-413897" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: kubelet daemon config:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> k8s: kubelet logs:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-413897

>>> host: docker daemon status:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: docker daemon config:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: docker system info:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: cri-docker daemon status:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: cri-docker daemon config:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: cri-dockerd version:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: containerd daemon status:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: containerd daemon config:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: containerd config dump:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: crio daemon status:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: crio daemon config:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: /etc/crio:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

>>> host: crio config:
* Profile "cilium-413897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-413897"

----------------------- debugLogs end: cilium-413897 [took: 5.633462089s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-413897" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-413897
--- SKIP: TestNetworkPlugins/group/cilium (5.82s)
