Test Report: Docker_Linux_crio_arm64 20109

a80036b9799ef97ff87d49db0998430356d1f02a:2025-01-20:37996

Failed tests (1/330)

Order  Failed test                  Duration (s)
36     TestAddons/parallel/Ingress  153.27
TestAddons/parallel/Ingress (153.27s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-483552 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-483552 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-483552 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b144af56-3541-4a00-8b1b-bf5fe4d93433] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b144af56-3541-4a00-8b1b-bf5fe4d93433] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.009813658s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-483552 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-483552 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.168540576s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-483552 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-483552 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
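
The step that fails is the ssh curl probe above: the remote command exited with status 28, which matches curl's exit code 28 (CURLE_OPERATION_TIMEDOUT), so the request to the ingress controller on the node hung until the deadline rather than being refused outright. A minimal sketch for reproducing the probe by hand against this profile, assuming the cluster is still running (the profile name, URL, and Host header come from the log above; the explicit --max-time is an illustrative assumption, and the nslookup line repeats the test's own follow-up check):

	# Re-run the ingress probe from inside the minikube node.
	# curl exit code 28 here means the request timed out.
	out/minikube-linux-arm64 -p addons-483552 ssh "curl -s --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"

	# Verify ingress-dns resolution against the node IP (as reported by 'minikube ip').
	nslookup hello-john.test 192.168.49.2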
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-483552
helpers_test.go:235: (dbg) docker inspect addons-483552:

-- stdout --
	[
	    {
	        "Id": "61755a0b0b5e8584c2f41d807cbd42398facd6d3eed9953bf1fa602f0fa1cb5b",
	        "Created": "2025-01-20T18:10:14.203127551Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 305800,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-01-20T18:10:14.357908184Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0434cf58b6dbace281e5de753aa4b2e3fe33dc9a3be53021531403743c3f155a",
	        "ResolvConfPath": "/var/lib/docker/containers/61755a0b0b5e8584c2f41d807cbd42398facd6d3eed9953bf1fa602f0fa1cb5b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/61755a0b0b5e8584c2f41d807cbd42398facd6d3eed9953bf1fa602f0fa1cb5b/hostname",
	        "HostsPath": "/var/lib/docker/containers/61755a0b0b5e8584c2f41d807cbd42398facd6d3eed9953bf1fa602f0fa1cb5b/hosts",
	        "LogPath": "/var/lib/docker/containers/61755a0b0b5e8584c2f41d807cbd42398facd6d3eed9953bf1fa602f0fa1cb5b/61755a0b0b5e8584c2f41d807cbd42398facd6d3eed9953bf1fa602f0fa1cb5b-json.log",
	        "Name": "/addons-483552",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-483552:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-483552",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/31718ee015db8d8aebb8129770826b3ecf3d4ff869cc5060caf1e7a17cfdccac-init/diff:/var/lib/docker/overlay2/46575390215fedaa6bd070b2a90e3837a745f97d8b854d3a6d816c050d310110/diff",
	                "MergedDir": "/var/lib/docker/overlay2/31718ee015db8d8aebb8129770826b3ecf3d4ff869cc5060caf1e7a17cfdccac/merged",
	                "UpperDir": "/var/lib/docker/overlay2/31718ee015db8d8aebb8129770826b3ecf3d4ff869cc5060caf1e7a17cfdccac/diff",
	                "WorkDir": "/var/lib/docker/overlay2/31718ee015db8d8aebb8129770826b3ecf3d4ff869cc5060caf1e7a17cfdccac/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-483552",
	                "Source": "/var/lib/docker/volumes/addons-483552/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-483552",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-483552",
	                "name.minikube.sigs.k8s.io": "addons-483552",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d7d5c71615539839dc4d5b0b0b70ae683e2d91e4106025e9025accef02a4c472",
	            "SandboxKey": "/var/run/docker/netns/d7d5c7161553",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-483552": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "f071dba583a75017abec4c9ca0f3120c7c4b45a2a7cd7d97651b78cdbf1e8613",
	                    "EndpointID": "90307b2d4e48491678216539b8c5dabf727d35435f59fa2b71d26cc21ca01d64",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-483552",
	                        "61755a0b0b5e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
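
The Ports map in the inspect output above is how the harness reaches the node: each container port is published on 127.0.0.1 with an ephemeral host port (22/tcp maps to 33139 for SSH in this run). A small sketch of extracting that port with a Go template, the same query the provisioning log below issues (container name taken from this report):

	# Print the host port bound to the container's SSH port (22/tcp).
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-483552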
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-483552 -n addons-483552
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-483552 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-483552 logs -n 25: (1.695656237s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-689857                                                                     | download-only-689857   | jenkins | v1.35.0 | 20 Jan 25 18:09 UTC | 20 Jan 25 18:09 UTC |
	| start   | --download-only -p                                                                          | download-docker-991041 | jenkins | v1.35.0 | 20 Jan 25 18:09 UTC |                     |
	|         | download-docker-991041                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-991041                                                                   | download-docker-991041 | jenkins | v1.35.0 | 20 Jan 25 18:09 UTC | 20 Jan 25 18:09 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-052288   | jenkins | v1.35.0 | 20 Jan 25 18:09 UTC |                     |
	|         | binary-mirror-052288                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:46851                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-052288                                                                     | binary-mirror-052288   | jenkins | v1.35.0 | 20 Jan 25 18:09 UTC | 20 Jan 25 18:09 UTC |
	| addons  | disable dashboard -p                                                                        | addons-483552          | jenkins | v1.35.0 | 20 Jan 25 18:09 UTC |                     |
	|         | addons-483552                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-483552          | jenkins | v1.35.0 | 20 Jan 25 18:09 UTC |                     |
	|         | addons-483552                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-483552 --wait=true                                                                | addons-483552          | jenkins | v1.35.0 | 20 Jan 25 18:09 UTC | 20 Jan 25 18:12 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-483552 addons disable                                                                | addons-483552          | jenkins | v1.35.0 | 20 Jan 25 18:12 UTC | 20 Jan 25 18:12 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-483552 addons disable                                                                | addons-483552          | jenkins | v1.35.0 | 20 Jan 25 18:13 UTC | 20 Jan 25 18:13 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-483552          | jenkins | v1.35.0 | 20 Jan 25 18:13 UTC | 20 Jan 25 18:13 UTC |
	|         | -p addons-483552                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-483552 addons disable                                                                | addons-483552          | jenkins | v1.35.0 | 20 Jan 25 18:13 UTC | 20 Jan 25 18:13 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-483552 ip                                                                            | addons-483552          | jenkins | v1.35.0 | 20 Jan 25 18:13 UTC | 20 Jan 25 18:13 UTC |
	| addons  | addons-483552 addons disable                                                                | addons-483552          | jenkins | v1.35.0 | 20 Jan 25 18:13 UTC | 20 Jan 25 18:13 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-483552 addons                                                                        | addons-483552          | jenkins | v1.35.0 | 20 Jan 25 18:13 UTC | 20 Jan 25 18:13 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-483552 addons                                                                        | addons-483552          | jenkins | v1.35.0 | 20 Jan 25 18:13 UTC | 20 Jan 25 18:13 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-483552 ssh curl -s                                                                   | addons-483552          | jenkins | v1.35.0 | 20 Jan 25 18:13 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-483552 addons                                                                        | addons-483552          | jenkins | v1.35.0 | 20 Jan 25 18:14 UTC | 20 Jan 25 18:14 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-483552 addons                                                                        | addons-483552          | jenkins | v1.35.0 | 20 Jan 25 18:14 UTC | 20 Jan 25 18:14 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-483552 addons                                                                        | addons-483552          | jenkins | v1.35.0 | 20 Jan 25 18:14 UTC | 20 Jan 25 18:14 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-483552 addons disable                                                                | addons-483552          | jenkins | v1.35.0 | 20 Jan 25 18:14 UTC | 20 Jan 25 18:14 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ssh     | addons-483552 ssh cat                                                                       | addons-483552          | jenkins | v1.35.0 | 20 Jan 25 18:14 UTC | 20 Jan 25 18:14 UTC |
	|         | /opt/local-path-provisioner/pvc-33814caa-87d3-4ef4-8953-290c67f6d8c4_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-483552 addons disable                                                                | addons-483552          | jenkins | v1.35.0 | 20 Jan 25 18:14 UTC | 20 Jan 25 18:15 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-483552 addons                                                                        | addons-483552          | jenkins | v1.35.0 | 20 Jan 25 18:15 UTC | 20 Jan 25 18:15 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-483552 ip                                                                            | addons-483552          | jenkins | v1.35.0 | 20 Jan 25 18:16 UTC | 20 Jan 25 18:16 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 18:09:47
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 18:09:47.859734  305308 out.go:345] Setting OutFile to fd 1 ...
	I0120 18:09:47.859851  305308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 18:09:47.859862  305308 out.go:358] Setting ErrFile to fd 2...
	I0120 18:09:47.859867  305308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 18:09:47.860098  305308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-299163/.minikube/bin
	I0120 18:09:47.860511  305308 out.go:352] Setting JSON to false
	I0120 18:09:47.861351  305308 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6732,"bootTime":1737389856,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0120 18:09:47.861419  305308 start.go:139] virtualization:  
	I0120 18:09:47.864943  305308 out.go:177] * [addons-483552] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0120 18:09:47.867879  305308 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 18:09:47.867938  305308 notify.go:220] Checking for updates...
	I0120 18:09:47.874481  305308 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 18:09:47.877267  305308 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-299163/kubeconfig
	I0120 18:09:47.880190  305308 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-299163/.minikube
	I0120 18:09:47.883070  305308 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0120 18:09:47.885890  305308 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 18:09:47.888950  305308 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 18:09:47.918677  305308 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 18:09:47.918794  305308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 18:09:47.973549  305308 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2025-01-20 18:09:47.964324222 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 18:09:47.973662  305308 docker.go:318] overlay module found
	I0120 18:09:47.976810  305308 out.go:177] * Using the docker driver based on user configuration
	I0120 18:09:47.979669  305308 start.go:297] selected driver: docker
	I0120 18:09:47.979689  305308 start.go:901] validating driver "docker" against <nil>
	I0120 18:09:47.979703  305308 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 18:09:47.980460  305308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 18:09:48.033576  305308 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2025-01-20 18:09:48.02413471 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 18:09:48.033829  305308 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 18:09:48.034169  305308 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 18:09:48.037201  305308 out.go:177] * Using Docker driver with root privileges
	I0120 18:09:48.040094  305308 cni.go:84] Creating CNI manager for ""
	I0120 18:09:48.040186  305308 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0120 18:09:48.040201  305308 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0120 18:09:48.040292  305308 start.go:340] cluster config:
	{Name:addons-483552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-483552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 18:09:48.043439  305308 out.go:177] * Starting "addons-483552" primary control-plane node in "addons-483552" cluster
	I0120 18:09:48.046406  305308 cache.go:121] Beginning downloading kic base image for docker with crio
	I0120 18:09:48.049457  305308 out.go:177] * Pulling base image v0.0.46 ...
	I0120 18:09:48.052359  305308 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 18:09:48.052451  305308 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-299163/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I0120 18:09:48.052465  305308 cache.go:56] Caching tarball of preloaded images
	I0120 18:09:48.052460  305308 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0120 18:09:48.052581  305308 preload.go:172] Found /home/jenkins/minikube-integration/20109-299163/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0120 18:09:48.052593  305308 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 18:09:48.053061  305308 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/config.json ...
	I0120 18:09:48.053111  305308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/config.json: {Name:mk95ff52b47f5fa3b298542cb6c6e88b02a1e739 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 18:09:48.069673  305308 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0120 18:09:48.069839  305308 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0120 18:09:48.069861  305308 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory, skipping pull
	I0120 18:09:48.069868  305308 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in cache, skipping pull
	I0120 18:09:48.069875  305308 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	I0120 18:09:48.069881  305308 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 from local cache
	I0120 18:10:05.763480  305308 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 from cached tarball
	I0120 18:10:05.763522  305308 cache.go:227] Successfully downloaded all kic artifacts
	I0120 18:10:05.763565  305308 start.go:360] acquireMachinesLock for addons-483552: {Name:mk28f5c6b58f5ac8cd13049b0be51d40ce297ce6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 18:10:05.763709  305308 start.go:364] duration metric: took 104.571µs to acquireMachinesLock for "addons-483552"
	I0120 18:10:05.763738  305308 start.go:93] Provisioning new machine with config: &{Name:addons-483552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-483552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 18:10:05.763812  305308 start.go:125] createHost starting for "" (driver="docker")
	I0120 18:10:05.766883  305308 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0120 18:10:05.767145  305308 start.go:159] libmachine.API.Create for "addons-483552" (driver="docker")
	I0120 18:10:05.767181  305308 client.go:168] LocalClient.Create starting
	I0120 18:10:05.767291  305308 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20109-299163/.minikube/certs/ca.pem
	I0120 18:10:06.550337  305308 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20109-299163/.minikube/certs/cert.pem
	I0120 18:10:07.789580  305308 cli_runner.go:164] Run: docker network inspect addons-483552 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0120 18:10:07.805308  305308 cli_runner.go:211] docker network inspect addons-483552 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0120 18:10:07.805398  305308 network_create.go:284] running [docker network inspect addons-483552] to gather additional debugging logs...
	I0120 18:10:07.805419  305308 cli_runner.go:164] Run: docker network inspect addons-483552
	W0120 18:10:07.821346  305308 cli_runner.go:211] docker network inspect addons-483552 returned with exit code 1
	I0120 18:10:07.821382  305308 network_create.go:287] error running [docker network inspect addons-483552]: docker network inspect addons-483552: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-483552 not found
	I0120 18:10:07.821402  305308 network_create.go:289] output of [docker network inspect addons-483552]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-483552 not found
	
	** /stderr **
	I0120 18:10:07.821498  305308 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0120 18:10:07.837914  305308 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400195b080}
	I0120 18:10:07.837959  305308 network_create.go:124] attempt to create docker network addons-483552 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0120 18:10:07.838018  305308 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-483552 addons-483552
	I0120 18:10:07.910483  305308 network_create.go:108] docker network addons-483552 192.168.49.0/24 created
	I0120 18:10:07.910522  305308 kic.go:121] calculated static IP "192.168.49.2" for the "addons-483552" container
	I0120 18:10:07.910605  305308 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0120 18:10:07.926059  305308 cli_runner.go:164] Run: docker volume create addons-483552 --label name.minikube.sigs.k8s.io=addons-483552 --label created_by.minikube.sigs.k8s.io=true
	I0120 18:10:07.943939  305308 oci.go:103] Successfully created a docker volume addons-483552
	I0120 18:10:07.944035  305308 cli_runner.go:164] Run: docker run --rm --name addons-483552-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-483552 --entrypoint /usr/bin/test -v addons-483552:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I0120 18:10:09.963462  305308 cli_runner.go:217] Completed: docker run --rm --name addons-483552-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-483552 --entrypoint /usr/bin/test -v addons-483552:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib: (2.019383088s)
	I0120 18:10:09.963493  305308 oci.go:107] Successfully prepared a docker volume addons-483552
	I0120 18:10:09.963515  305308 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 18:10:09.963535  305308 kic.go:194] Starting extracting preloaded images to volume ...
	I0120 18:10:09.963609  305308 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20109-299163/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-483552:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I0120 18:10:14.131319  305308 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20109-299163/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-483552:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (4.167663413s)
	I0120 18:10:14.131355  305308 kic.go:203] duration metric: took 4.167812552s to extract preloaded images to volume ...
	W0120 18:10:14.131510  305308 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0120 18:10:14.131622  305308 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0120 18:10:14.188471  305308 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-483552 --name addons-483552 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-483552 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-483552 --network addons-483552 --ip 192.168.49.2 --volume addons-483552:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I0120 18:10:14.519540  305308 cli_runner.go:164] Run: docker container inspect addons-483552 --format={{.State.Running}}
	I0120 18:10:14.539160  305308 cli_runner.go:164] Run: docker container inspect addons-483552 --format={{.State.Status}}
	I0120 18:10:14.564648  305308 cli_runner.go:164] Run: docker exec addons-483552 stat /var/lib/dpkg/alternatives/iptables
	I0120 18:10:14.624839  305308 oci.go:144] the created container "addons-483552" has a running status.
	I0120 18:10:14.624867  305308 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20109-299163/.minikube/machines/addons-483552/id_rsa...
	I0120 18:10:15.349747  305308 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20109-299163/.minikube/machines/addons-483552/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0120 18:10:15.380188  305308 cli_runner.go:164] Run: docker container inspect addons-483552 --format={{.State.Status}}
	I0120 18:10:15.405599  305308 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0120 18:10:15.405629  305308 kic_runner.go:114] Args: [docker exec --privileged addons-483552 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0120 18:10:15.458551  305308 cli_runner.go:164] Run: docker container inspect addons-483552 --format={{.State.Status}}
	I0120 18:10:15.478663  305308 machine.go:93] provisionDockerMachine start ...
	I0120 18:10:15.478781  305308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-483552
	I0120 18:10:15.501514  305308 main.go:141] libmachine: Using SSH client type: native
	I0120 18:10:15.501812  305308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0120 18:10:15.501823  305308 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 18:10:15.629469  305308 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-483552
	
	I0120 18:10:15.629492  305308 ubuntu.go:169] provisioning hostname "addons-483552"
	I0120 18:10:15.629558  305308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-483552
	I0120 18:10:15.648592  305308 main.go:141] libmachine: Using SSH client type: native
	I0120 18:10:15.648870  305308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0120 18:10:15.648890  305308 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-483552 && echo "addons-483552" | sudo tee /etc/hostname
	I0120 18:10:15.790567  305308 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-483552
	
	I0120 18:10:15.790654  305308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-483552
	I0120 18:10:15.808640  305308 main.go:141] libmachine: Using SSH client type: native
	I0120 18:10:15.808887  305308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0120 18:10:15.808911  305308 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-483552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-483552/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-483552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 18:10:15.930025  305308 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 18:10:15.930055  305308 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20109-299163/.minikube CaCertPath:/home/jenkins/minikube-integration/20109-299163/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20109-299163/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20109-299163/.minikube}
	I0120 18:10:15.930076  305308 ubuntu.go:177] setting up certificates
	I0120 18:10:15.930086  305308 provision.go:84] configureAuth start
	I0120 18:10:15.930147  305308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-483552
	I0120 18:10:15.947045  305308 provision.go:143] copyHostCerts
	I0120 18:10:15.947130  305308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-299163/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20109-299163/.minikube/ca.pem (1082 bytes)
	I0120 18:10:15.947257  305308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-299163/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20109-299163/.minikube/cert.pem (1123 bytes)
	I0120 18:10:15.947328  305308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-299163/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20109-299163/.minikube/key.pem (1675 bytes)
	I0120 18:10:15.947392  305308 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20109-299163/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20109-299163/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20109-299163/.minikube/certs/ca-key.pem org=jenkins.addons-483552 san=[127.0.0.1 192.168.49.2 addons-483552 localhost minikube]
	I0120 18:10:16.389816  305308 provision.go:177] copyRemoteCerts
	I0120 18:10:16.389883  305308 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 18:10:16.389930  305308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-483552
	I0120 18:10:16.407061  305308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/addons-483552/id_rsa Username:docker}
	I0120 18:10:16.498383  305308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-299163/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 18:10:16.521892  305308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-299163/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0120 18:10:16.545397  305308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-299163/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 18:10:16.568864  305308 provision.go:87] duration metric: took 638.764897ms to configureAuth
	I0120 18:10:16.568891  305308 ubuntu.go:193] setting minikube options for container-runtime
	I0120 18:10:16.569080  305308 config.go:182] Loaded profile config "addons-483552": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 18:10:16.569184  305308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-483552
	I0120 18:10:16.586396  305308 main.go:141] libmachine: Using SSH client type: native
	I0120 18:10:16.586652  305308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0120 18:10:16.586675  305308 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 18:10:16.812728  305308 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 18:10:16.812752  305308 machine.go:96] duration metric: took 1.33406732s to provisionDockerMachine
	I0120 18:10:16.812763  305308 client.go:171] duration metric: took 11.045570483s to LocalClient.Create
	I0120 18:10:16.812776  305308 start.go:167] duration metric: took 11.045633743s to libmachine.API.Create "addons-483552"
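Note: the /etc/sysconfig/crio.minikube file written a few lines above is an environment file for the crio systemd unit; the --insecure-registry 10.96.0.0/12 flag lets CRI-O pull over plain HTTP from registries exposed on in-cluster Service IPs (the service CIDR configured further down). A minimal sketch of how such an environment file is typically wired into the unit; the drop-in path and crio binary path here are assumptions, the kicbase image ships its own wiring:
	# /etc/systemd/system/crio.service.d/10-minikube.conf  (hypothetical drop-in path)
	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS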
	I0120 18:10:16.812788  305308 start.go:293] postStartSetup for "addons-483552" (driver="docker")
	I0120 18:10:16.812799  305308 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 18:10:16.812870  305308 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 18:10:16.812914  305308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-483552
	I0120 18:10:16.829817  305308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/addons-483552/id_rsa Username:docker}
	I0120 18:10:16.918673  305308 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 18:10:16.921887  305308 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0120 18:10:16.921926  305308 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0120 18:10:16.921939  305308 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0120 18:10:16.921947  305308 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0120 18:10:16.921963  305308 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-299163/.minikube/addons for local assets ...
	I0120 18:10:16.922037  305308 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-299163/.minikube/files for local assets ...
	I0120 18:10:16.922068  305308 start.go:296] duration metric: took 109.273257ms for postStartSetup
	I0120 18:10:16.922381  305308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-483552
	I0120 18:10:16.939034  305308 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/config.json ...
	I0120 18:10:16.939337  305308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 18:10:16.939401  305308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-483552
	I0120 18:10:16.956235  305308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/addons-483552/id_rsa Username:docker}
	I0120 18:10:17.046733  305308 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0120 18:10:17.051247  305308 start.go:128] duration metric: took 11.287419289s to createHost
	I0120 18:10:17.051274  305308 start.go:83] releasing machines lock for "addons-483552", held for 11.287553078s
	I0120 18:10:17.051352  305308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-483552
	I0120 18:10:17.068847  305308 ssh_runner.go:195] Run: cat /version.json
	I0120 18:10:17.068909  305308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-483552
	I0120 18:10:17.069161  305308 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 18:10:17.069230  305308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-483552
	I0120 18:10:17.093931  305308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/addons-483552/id_rsa Username:docker}
	I0120 18:10:17.095061  305308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/addons-483552/id_rsa Username:docker}
	I0120 18:10:17.177204  305308 ssh_runner.go:195] Run: systemctl --version
	I0120 18:10:17.315405  305308 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 18:10:17.456534  305308 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0120 18:10:17.460660  305308 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 18:10:17.483222  305308 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0120 18:10:17.483330  305308 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 18:10:17.522019  305308 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
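Note: CNI config files in /etc/cni/net.d are loaded in lexicographic order and the runtime uses the first network it finds, so a preinstalled bridge or podman config would shadow the kindnet config applied later; renaming with a .mk_disabled suffix takes the file out of the *.conf/*.conflist globs without deleting it. Spelled out for one of the files named in the line above:
	sudo mv /etc/cni/net.d/100-crio-bridge.conf /etc/cni/net.d/100-crio-bridge.conf.mk_disabled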
	I0120 18:10:17.522090  305308 start.go:495] detecting cgroup driver to use...
	I0120 18:10:17.522138  305308 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0120 18:10:17.522223  305308 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 18:10:17.538118  305308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 18:10:17.549588  305308 docker.go:217] disabling cri-docker service (if available) ...
	I0120 18:10:17.549713  305308 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 18:10:17.564277  305308 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 18:10:17.579108  305308 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 18:10:17.670491  305308 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 18:10:17.766162  305308 docker.go:233] disabling docker service ...
	I0120 18:10:17.766230  305308 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 18:10:17.786468  305308 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 18:10:17.798730  305308 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 18:10:17.881305  305308 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 18:10:17.987857  305308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 18:10:18.002098  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 18:10:18.022277  305308 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 18:10:18.022355  305308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 18:10:18.033695  305308 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 18:10:18.033874  305308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 18:10:18.045370  305308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 18:10:18.057486  305308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 18:10:18.068323  305308 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 18:10:18.079262  305308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 18:10:18.090454  305308 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 18:10:18.107696  305308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
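Note: taken together, the sed edits above pin the pause image, force the cgroupfs cgroup manager, move conmon into the pod cgroup, and open unprivileged low ports via a default sysctl. A plausible reconstruction of the touched keys in /etc/crio/crio.conf.d/02-crio.conf after the edits (the resulting file itself is not captured in the log, and the section placement is an assumption based on stock crio.conf layout):
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]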
	I0120 18:10:18.118362  305308 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 18:10:18.128017  305308 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 18:10:18.137569  305308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 18:10:18.220153  305308 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 18:10:18.329134  305308 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 18:10:18.329249  305308 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 18:10:18.333084  305308 start.go:563] Will wait 60s for crictl version
	I0120 18:10:18.333167  305308 ssh_runner.go:195] Run: which crictl
	I0120 18:10:18.337190  305308 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 18:10:18.377535  305308 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0120 18:10:18.377651  305308 ssh_runner.go:195] Run: crio --version
	I0120 18:10:18.415374  305308 ssh_runner.go:195] Run: crio --version
	I0120 18:10:18.456200  305308 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.24.6 ...
	I0120 18:10:18.459107  305308 cli_runner.go:164] Run: docker network inspect addons-483552 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0120 18:10:18.476116  305308 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0120 18:10:18.479404  305308 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
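Note: the pipeline above is minikube's stock /etc/hosts update: grep -v strips any stale entry for the name, the fresh tab-separated entry is appended, and the result goes through a temp file and a single sudo cp into place. The same idea as a reusable sketch (the function name is illustrative, not from the source):
	update_hosts_entry() {
	  local ip="$1" name="$2"
	  { grep -v $'\t'"${name}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
	  sudo cp "/tmp/h.$$" /etc/hosts
	}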
	I0120 18:10:18.489649  305308 kubeadm.go:883] updating cluster {Name:addons-483552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-483552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 18:10:18.489838  305308 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 18:10:18.489911  305308 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 18:10:18.568827  305308 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 18:10:18.568853  305308 crio.go:433] Images already preloaded, skipping extraction
	I0120 18:10:18.568908  305308 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 18:10:18.610355  305308 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 18:10:18.610379  305308 cache_images.go:84] Images are preloaded, skipping loading
	I0120 18:10:18.610387  305308 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.32.0 crio true true} ...
	I0120 18:10:18.610479  305308 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-483552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:addons-483552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
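Note: the kubelet fragment above is written as a systemd drop-in, which is why ExecStart appears twice: for ordinary (non-oneshot) services only one ExecStart is allowed, so an override must first clear the inherited command with an empty ExecStart= before setting its own. The idiom in isolation (flags abbreviated here; the full command line is in the fragment above):
	# in a *.service.d/ drop-in, the empty assignment resets ExecStart from the base unit
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --config=/var/lib/kubelet/config.yaml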
	I0120 18:10:18.610574  305308 ssh_runner.go:195] Run: crio config
	I0120 18:10:18.662533  305308 cni.go:84] Creating CNI manager for ""
	I0120 18:10:18.662557  305308 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0120 18:10:18.662570  305308 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 18:10:18.662593  305308 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-483552 NodeName:addons-483552 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 18:10:18.662720  305308 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-483552"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 18:10:18.662793  305308 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 18:10:18.671708  305308 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 18:10:18.671788  305308 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 18:10:18.680361  305308 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0120 18:10:18.699056  305308 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 18:10:18.717015  305308 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
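Note: the three "scp memory" writes above stage the kubelet drop-in, the kubelet unit, and the kubeadm config onto the node. Once /var/tmp/minikube/kubeadm.yaml.new is in place, a multi-document config of this shape can be sanity-checked before init with kubeadm's own validator; this step is not part of the log, just a sketch (the subcommand exists in recent kubeadm releases):
	sudo /var/lib/minikube/binaries/v1.32.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new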
	I0120 18:10:18.735156  305308 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0120 18:10:18.738497  305308 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 18:10:18.749551  305308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 18:10:18.831212  305308 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 18:10:18.844848  305308 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552 for IP: 192.168.49.2
	I0120 18:10:18.844912  305308 certs.go:194] generating shared ca certs ...
	I0120 18:10:18.844945  305308 certs.go:226] acquiring lock for ca certs: {Name:mke11b5e9e32087f295845c1d91045c0b4ff2dc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 18:10:18.845100  305308 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-299163/.minikube/ca.key
	I0120 18:10:19.125711  305308 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-299163/.minikube/ca.crt ...
	I0120 18:10:19.125741  305308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-299163/.minikube/ca.crt: {Name:mkdbd746ae1725340da78e294feb30698e60fb21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 18:10:19.125957  305308 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-299163/.minikube/ca.key ...
	I0120 18:10:19.125972  305308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-299163/.minikube/ca.key: {Name:mk6b9127b3024ad228cdf653a77af586a9fc5191 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 18:10:19.126061  305308 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-299163/.minikube/proxy-client-ca.key
	I0120 18:10:19.917330  305308 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-299163/.minikube/proxy-client-ca.crt ...
	I0120 18:10:19.917362  305308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-299163/.minikube/proxy-client-ca.crt: {Name:mk3b29902bcab55ddf88a7bd41b0953414566bbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 18:10:19.918115  305308 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-299163/.minikube/proxy-client-ca.key ...
	I0120 18:10:19.918134  305308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-299163/.minikube/proxy-client-ca.key: {Name:mk2138b6aed495dab14e2440a8c39ba55739be04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 18:10:19.918783  305308 certs.go:256] generating profile certs ...
	I0120 18:10:19.918853  305308 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.key
	I0120 18:10:19.918881  305308 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt with IP's: []
	I0120 18:10:20.404751  305308 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt ...
	I0120 18:10:20.404788  305308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: {Name:mk3c7bf11275606ae6e44c4d4ea17a9b69f34962 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 18:10:20.404974  305308 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.key ...
	I0120 18:10:20.404986  305308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.key: {Name:mka8b3726bd1dfa5e767c06f825082979a75ba1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 18:10:20.405079  305308 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/apiserver.key.da2e48a7
	I0120 18:10:20.405099  305308 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/apiserver.crt.da2e48a7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0120 18:10:20.584835  305308 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/apiserver.crt.da2e48a7 ...
	I0120 18:10:20.584866  305308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/apiserver.crt.da2e48a7: {Name:mk10526fc43362711dfbfed446ef8d30271b56a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 18:10:20.585043  305308 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/apiserver.key.da2e48a7 ...
	I0120 18:10:20.585058  305308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/apiserver.key.da2e48a7: {Name:mk2227c8da7e768fe0bdddf0ed4daa9225adb0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 18:10:20.585739  305308 certs.go:381] copying /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/apiserver.crt.da2e48a7 -> /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/apiserver.crt
	I0120 18:10:20.585846  305308 certs.go:385] copying /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/apiserver.key.da2e48a7 -> /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/apiserver.key
	I0120 18:10:20.585899  305308 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/proxy-client.key
	I0120 18:10:20.585922  305308 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/proxy-client.crt with IP's: []
	I0120 18:10:21.275097  305308 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/proxy-client.crt ...
	I0120 18:10:21.275128  305308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/proxy-client.crt: {Name:mk33ff784e42409313296d0602019e7707d1eded Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 18:10:21.275344  305308 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/proxy-client.key ...
	I0120 18:10:21.275359  305308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/proxy-client.key: {Name:mkff13dcc25b9d502bac897931f1394192aad19d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 18:10:21.275552  305308 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-299163/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 18:10:21.275598  305308 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-299163/.minikube/certs/ca.pem (1082 bytes)
	I0120 18:10:21.275628  305308 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-299163/.minikube/certs/cert.pem (1123 bytes)
	I0120 18:10:21.275660  305308 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-299163/.minikube/certs/key.pem (1675 bytes)
	I0120 18:10:21.276261  305308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-299163/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 18:10:21.301293  305308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-299163/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 18:10:21.325165  305308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-299163/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 18:10:21.348527  305308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-299163/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 18:10:21.372411  305308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0120 18:10:21.395867  305308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 18:10:21.419257  305308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 18:10:21.442409  305308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 18:10:21.466534  305308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-299163/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 18:10:21.490276  305308 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 18:10:21.508720  305308 ssh_runner.go:195] Run: openssl version
	I0120 18:10:21.514467  305308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 18:10:21.524392  305308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 18:10:21.527997  305308 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 18:10 /usr/share/ca-certificates/minikubeCA.pem
	I0120 18:10:21.528069  305308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 18:10:21.535094  305308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
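Note: the two steps above implement OpenSSL's hashed-directory CA lookup: TLS clients locate a CA in /etc/ssl/certs by the hash of its subject name, so the certificate is linked as <subject-hash>.0. The hash in the symlink name is exactly what the earlier openssl call prints:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints: b5213941   (matching the b5213941.0 symlink created above)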
	I0120 18:10:21.544726  305308 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 18:10:21.548223  305308 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0120 18:10:21.548274  305308 kubeadm.go:392] StartCluster: {Name:addons-483552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-483552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 18:10:21.548372  305308 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 18:10:21.548436  305308 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 18:10:21.586155  305308 cri.go:89] found id: ""
	I0120 18:10:21.586233  305308 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 18:10:21.595185  305308 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 18:10:21.603946  305308 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0120 18:10:21.604029  305308 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 18:10:21.612681  305308 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 18:10:21.612743  305308 kubeadm.go:157] found existing configuration files:
	
	I0120 18:10:21.612804  305308 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 18:10:21.621676  305308 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 18:10:21.621776  305308 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 18:10:21.630613  305308 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 18:10:21.639914  305308 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 18:10:21.640000  305308 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 18:10:21.648586  305308 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 18:10:21.657446  305308 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 18:10:21.657538  305308 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 18:10:21.665996  305308 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 18:10:21.674647  305308 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 18:10:21.674717  305308 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 18:10:21.682995  305308 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0120 18:10:21.726686  305308 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 18:10:21.726748  305308 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 18:10:21.744393  305308 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0120 18:10:21.744473  305308 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1075-aws
	I0120 18:10:21.744511  305308 kubeadm.go:310] OS: Linux
	I0120 18:10:21.744561  305308 kubeadm.go:310] CGROUPS_CPU: enabled
	I0120 18:10:21.744613  305308 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0120 18:10:21.744665  305308 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0120 18:10:21.744717  305308 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0120 18:10:21.744767  305308 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0120 18:10:21.744819  305308 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0120 18:10:21.744869  305308 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0120 18:10:21.744921  305308 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0120 18:10:21.744970  305308 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0120 18:10:21.809915  305308 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 18:10:21.810030  305308 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 18:10:21.810126  305308 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 18:10:21.816883  305308 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 18:10:21.823571  305308 out.go:235]   - Generating certificates and keys ...
	I0120 18:10:21.823674  305308 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 18:10:21.823750  305308 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 18:10:22.018495  305308 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 18:10:22.929909  305308 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 18:10:23.191154  305308 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 18:10:24.036444  305308 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 18:10:25.215090  305308 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 18:10:25.215443  305308 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-483552 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0120 18:10:26.100877  305308 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 18:10:26.101010  305308 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-483552 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0120 18:10:26.515531  305308 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 18:10:26.797013  305308 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 18:10:27.372865  305308 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 18:10:27.373114  305308 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 18:10:27.963102  305308 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 18:10:28.507940  305308 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 18:10:28.792224  305308 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 18:10:30.003820  305308 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 18:10:30.418849  305308 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 18:10:30.419728  305308 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 18:10:30.422720  305308 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 18:10:30.426354  305308 out.go:235]   - Booting up control plane ...
	I0120 18:10:30.426456  305308 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 18:10:30.426533  305308 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 18:10:30.426609  305308 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 18:10:30.435678  305308 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 18:10:30.442114  305308 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 18:10:30.442350  305308 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 18:10:30.540876  305308 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 18:10:30.541000  305308 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 18:10:31.541947  305308 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000904929s
	I0120 18:10:31.542077  305308 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 18:10:38.046166  305308 kubeadm.go:310] [api-check] The API server is healthy after 6.502418908s
	I0120 18:10:38.067389  305308 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 18:10:38.085693  305308 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 18:10:38.113987  305308 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 18:10:38.114203  305308 kubeadm.go:310] [mark-control-plane] Marking the node addons-483552 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 18:10:38.126700  305308 kubeadm.go:310] [bootstrap-token] Using token: wnls3k.x658g5lm42hcuvof
	I0120 18:10:38.129646  305308 out.go:235]   - Configuring RBAC rules ...
	I0120 18:10:38.129804  305308 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 18:10:38.133815  305308 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 18:10:38.143846  305308 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 18:10:38.150353  305308 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 18:10:38.154305  305308 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 18:10:38.158954  305308 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 18:10:38.450942  305308 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 18:10:38.887853  305308 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 18:10:39.450837  305308 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 18:10:39.452044  305308 kubeadm.go:310] 
	I0120 18:10:39.452112  305308 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 18:10:39.452118  305308 kubeadm.go:310] 
	I0120 18:10:39.452195  305308 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 18:10:39.452200  305308 kubeadm.go:310] 
	I0120 18:10:39.452226  305308 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 18:10:39.452285  305308 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 18:10:39.452339  305308 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 18:10:39.452343  305308 kubeadm.go:310] 
	I0120 18:10:39.452397  305308 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 18:10:39.452401  305308 kubeadm.go:310] 
	I0120 18:10:39.452448  305308 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 18:10:39.452452  305308 kubeadm.go:310] 
	I0120 18:10:39.452504  305308 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 18:10:39.452579  305308 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 18:10:39.452654  305308 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 18:10:39.452660  305308 kubeadm.go:310] 
	I0120 18:10:39.452748  305308 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 18:10:39.452829  305308 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 18:10:39.452838  305308 kubeadm.go:310] 
	I0120 18:10:39.452922  305308 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wnls3k.x658g5lm42hcuvof \
	I0120 18:10:39.453025  305308 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:72b9d59ac86c4df6c7dd7979a2d23f350a38478561fb9ee714c2b0d91a1011fd \
	I0120 18:10:39.453050  305308 kubeadm.go:310] 	--control-plane 
	I0120 18:10:39.453056  305308 kubeadm.go:310] 
	I0120 18:10:39.453140  305308 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 18:10:39.453144  305308 kubeadm.go:310] 
	I0120 18:10:39.453225  305308 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wnls3k.x658g5lm42hcuvof \
	I0120 18:10:39.453327  305308 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:72b9d59ac86c4df6c7dd7979a2d23f350a38478561fb9ee714c2b0d91a1011fd 
	I0120 18:10:39.457167  305308 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0120 18:10:39.457446  305308 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1075-aws\n", err: exit status 1
	I0120 18:10:39.457562  305308 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
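Note: the [WARNING] lines above correspond to checks deliberately skipped via --ignore-preflight-errors in the kubeadm init invocation further up; with the kic driver the "node" is a container sharing the host kernel, so checks such as Swap, SystemVerification, and the bridge-nf-call-iptables file check probe the host environment and may not hold inside the container. Reduced to its skeleton, showing only a subset of the skipped checks from the full command above:
	sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=Swap,Mem,NumCPU,SystemVerification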
	I0120 18:10:39.457596  305308 cni.go:84] Creating CNI manager for ""
	I0120 18:10:39.457616  305308 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0120 18:10:39.462638  305308 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0120 18:10:39.465601  305308 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0120 18:10:39.469274  305308 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.0/kubectl ...
	I0120 18:10:39.469295  305308 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0120 18:10:39.486429  305308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0120 18:10:39.757642  305308 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 18:10:39.757797  305308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 18:10:39.757907  305308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-483552 minikube.k8s.io/updated_at=2025_01_20T18_10_39_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5361cb60dc81b84464882b386f50211c10a5a7cc minikube.k8s.io/name=addons-483552 minikube.k8s.io/primary=true
	I0120 18:10:39.772823  305308 ops.go:34] apiserver oom_adj: -16
	I0120 18:10:39.895332  305308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 18:10:40.396027  305308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 18:10:40.895385  305308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 18:10:41.395410  305308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 18:10:41.895977  305308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 18:10:42.395758  305308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 18:10:42.896170  305308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 18:10:43.395478  305308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 18:10:43.895448  305308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 18:10:43.998516  305308 kubeadm.go:1113] duration metric: took 4.240779676s to wait for elevateKubeSystemPrivileges
	I0120 18:10:43.998545  305308 kubeadm.go:394] duration metric: took 22.450274541s to StartCluster
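Note: the burst of repeated "kubectl get sa default" runs above, spaced roughly 500ms apart, is minikube polling until the cluster's default ServiceAccount exists, a common readiness signal that the ServiceAccount controller is up; the 4.24s "elevateKubeSystemPrivileges" metric covers that wait plus the minikube-rbac binding. The retry pattern as a standalone sketch (the interval is an assumption read off the timestamps):
	until sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done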
	I0120 18:10:43.998562  305308 settings.go:142] acquiring lock: {Name:mkd6b65b8eefb7d5e9ed2e5a7efb42d9619b6fd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 18:10:43.998671  305308 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20109-299163/kubeconfig
	I0120 18:10:43.999074  305308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-299163/kubeconfig: {Name:mk929896791b3cbce20e7164b3f4454c2898a5f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 18:10:43.999266  305308 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 18:10:43.999453  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0120 18:10:43.999700  305308 config.go:182] Loaded profile config "addons-483552": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 18:10:43.999739  305308 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0120 18:10:43.999826  305308 addons.go:69] Setting yakd=true in profile "addons-483552"
	I0120 18:10:43.999845  305308 addons.go:238] Setting addon yakd=true in "addons-483552"
	I0120 18:10:43.999870  305308 host.go:66] Checking if "addons-483552" exists ...
	I0120 18:10:44.000427  305308 cli_runner.go:164] Run: docker container inspect addons-483552 --format={{.State.Status}}
	I0120 18:10:44.000911  305308 addons.go:69] Setting metrics-server=true in profile "addons-483552"
	I0120 18:10:44.000935  305308 addons.go:238] Setting addon metrics-server=true in "addons-483552"
	I0120 18:10:44.000970  305308 host.go:66] Checking if "addons-483552" exists ...
	I0120 18:10:44.001413  305308 cli_runner.go:164] Run: docker container inspect addons-483552 --format={{.State.Status}}
	I0120 18:10:44.001680  305308 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-483552"
	I0120 18:10:44.001699  305308 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-483552"
	I0120 18:10:44.001725  305308 host.go:66] Checking if "addons-483552" exists ...
	I0120 18:10:44.002220  305308 cli_runner.go:164] Run: docker container inspect addons-483552 --format={{.State.Status}}
	I0120 18:10:44.006673  305308 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-483552"
	I0120 18:10:44.008125  305308 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-483552"
	I0120 18:10:44.008565  305308 host.go:66] Checking if "addons-483552" exists ...
	I0120 18:10:44.012402  305308 cli_runner.go:164] Run: docker container inspect addons-483552 --format={{.State.Status}}
	I0120 18:10:44.007645  305308 addons.go:69] Setting registry=true in profile "addons-483552"
	I0120 18:10:44.012908  305308 addons.go:238] Setting addon registry=true in "addons-483552"
	I0120 18:10:44.012980  305308 host.go:66] Checking if "addons-483552" exists ...
	I0120 18:10:44.013578  305308 cli_runner.go:164] Run: docker container inspect addons-483552 --format={{.State.Status}}
	I0120 18:10:44.007659  305308 addons.go:69] Setting storage-provisioner=true in profile "addons-483552"
	I0120 18:10:44.007672  305308 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-483552"
	I0120 18:10:44.007857  305308 addons.go:69] Setting volcano=true in profile "addons-483552"
	I0120 18:10:44.007867  305308 addons.go:69] Setting volumesnapshots=true in profile "addons-483552"
	I0120 18:10:44.008312  305308 out.go:177] * Verifying Kubernetes components...
	I0120 18:10:44.014629  305308 addons.go:69] Setting cloud-spanner=true in profile "addons-483552"
	I0120 18:10:44.015208  305308 addons.go:238] Setting addon cloud-spanner=true in "addons-483552"
	I0120 18:10:44.015291  305308 host.go:66] Checking if "addons-483552" exists ...
	I0120 18:10:44.016234  305308 cli_runner.go:164] Run: docker container inspect addons-483552 --format={{.State.Status}}
	I0120 18:10:44.014642  305308 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-483552"
	I0120 18:10:44.017218  305308 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-483552"
	I0120 18:10:44.017279  305308 host.go:66] Checking if "addons-483552" exists ...
	I0120 18:10:44.019842  305308 cli_runner.go:164] Run: docker container inspect addons-483552 --format={{.State.Status}}
	I0120 18:10:44.014646  305308 addons.go:69] Setting default-storageclass=true in profile "addons-483552"
	I0120 18:10:44.036116  305308 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-483552"
	I0120 18:10:44.036459  305308 cli_runner.go:164] Run: docker container inspect addons-483552 --format={{.State.Status}}
	I0120 18:10:44.014652  305308 addons.go:69] Setting gcp-auth=true in profile "addons-483552"
	I0120 18:10:44.057374  305308 mustload.go:65] Loading cluster: addons-483552
	I0120 18:10:44.057582  305308 config.go:182] Loaded profile config "addons-483552": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 18:10:44.014656  305308 addons.go:69] Setting ingress=true in profile "addons-483552"
	I0120 18:10:44.057964  305308 addons.go:238] Setting addon ingress=true in "addons-483552"
	I0120 18:10:44.058010  305308 host.go:66] Checking if "addons-483552" exists ...
	I0120 18:10:44.058429  305308 cli_runner.go:164] Run: docker container inspect addons-483552 --format={{.State.Status}}
	I0120 18:10:44.014660  305308 addons.go:69] Setting ingress-dns=true in profile "addons-483552"
	I0120 18:10:44.080065  305308 addons.go:238] Setting addon ingress-dns=true in "addons-483552"
	I0120 18:10:44.080116  305308 host.go:66] Checking if "addons-483552" exists ...
	I0120 18:10:44.080579  305308 cli_runner.go:164] Run: docker container inspect addons-483552 --format={{.State.Status}}
	I0120 18:10:44.014664  305308 addons.go:69] Setting inspektor-gadget=true in profile "addons-483552"
	I0120 18:10:44.103943  305308 addons.go:238] Setting addon inspektor-gadget=true in "addons-483552"
	I0120 18:10:44.103991  305308 host.go:66] Checking if "addons-483552" exists ...
	I0120 18:10:44.104490  305308 cli_runner.go:164] Run: docker container inspect addons-483552 --format={{.State.Status}}
	I0120 18:10:44.107984  305308 cli_runner.go:164] Run: docker container inspect addons-483552 --format={{.State.Status}}
	I0120 18:10:44.015028  305308 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-483552"
	I0120 18:10:44.122102  305308 cli_runner.go:164] Run: docker container inspect addons-483552 --format={{.State.Status}}
	I0120 18:10:44.015043  305308 addons.go:238] Setting addon volcano=true in "addons-483552"
	I0120 18:10:44.129979  305308 host.go:66] Checking if "addons-483552" exists ...
	I0120 18:10:44.130498  305308 cli_runner.go:164] Run: docker container inspect addons-483552 --format={{.State.Status}}
	I0120 18:10:44.015055  305308 addons.go:238] Setting addon storage-provisioner=true in "addons-483552"
	I0120 18:10:44.169409  305308 host.go:66] Checking if "addons-483552" exists ...
	I0120 18:10:44.170026  305308 cli_runner.go:164] Run: docker container inspect addons-483552 --format={{.State.Status}}
	I0120 18:10:44.189300  305308 addons.go:238] Setting addon default-storageclass=true in "addons-483552"
	I0120 18:10:44.189360  305308 host.go:66] Checking if "addons-483552" exists ...
	I0120 18:10:44.189850  305308 cli_runner.go:164] Run: docker container inspect addons-483552 --format={{.State.Status}}
	I0120 18:10:44.015433  305308 addons.go:238] Setting addon volumesnapshots=true in "addons-483552"
	I0120 18:10:44.217988  305308 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0120 18:10:44.218113  305308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 18:10:44.221259  305308 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0120 18:10:44.221281  305308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0120 18:10:44.221412  305308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-483552
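The cli_runner invocation above uses a Go template against `docker container inspect` to find the host port that Docker mapped to the node container's SSH port (22/tcp); the sshutil.go lines further down then dial that port. Run standalone it looks like this (container name from this run):

	# Sketch: recover the SSH host port for the kic node container.
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  addons-483552
	# Prints the mapped port, e.g. 33139 as seen in the sshutil.go lines below.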
	I0120 18:10:44.234966  305308 host.go:66] Checking if "addons-483552" exists ...
	I0120 18:10:44.235469  305308 cli_runner.go:164] Run: docker container inspect addons-483552 --format={{.State.Status}}
	I0120 18:10:44.268884  305308 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0120 18:10:44.269074  305308 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0120 18:10:44.276354  305308 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 18:10:44.276446  305308 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 18:10:44.276550  305308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-483552
	I0120 18:10:44.285350  305308 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0120 18:10:44.287823  305308 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0120 18:10:44.292742  305308 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0120 18:10:44.295221  305308 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0120 18:10:44.301624  305308 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0120 18:10:44.302349  305308 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0120 18:10:44.302374  305308 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0120 18:10:44.302441  305308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-483552
	I0120 18:10:44.314368  305308 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0120 18:10:44.314435  305308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0120 18:10:44.314531  305308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-483552
	I0120 18:10:44.323935  305308 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0120 18:10:44.328891  305308 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0120 18:10:44.328918  305308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0120 18:10:44.328987  305308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-483552
	I0120 18:10:44.343073  305308 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0120 18:10:44.346624  305308 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0120 18:10:44.346687  305308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0120 18:10:44.346769  305308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-483552
	I0120 18:10:44.359335  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
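The bash pipeline above pulls the coredns ConfigMap, uses sed to splice a hosts block (mapping host.minikube.internal to the gateway address 192.168.49.1, with fallthrough to normal resolution) and a log directive into the Corefile, and pipes the result back through `kubectl replace`. To confirm the edit by hand (a sketch; the context name is from this run):

	# Sketch: inspect the patched Corefile after the replace above completes.
	kubectl --context addons-483552 -n kube-system get configmap coredns -o yaml
	# The injected stanza should look like:
	#     hosts {
	#        192.168.49.1 host.minikube.internal
	#        fallthrough
	#     }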
	I0120 18:10:44.370807  305308 out.go:177]   - Using image docker.io/registry:2.8.3
	I0120 18:10:44.374008  305308 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0120 18:10:44.374074  305308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0120 18:10:44.374163  305308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-483552
	I0120 18:10:44.379212  305308 host.go:66] Checking if "addons-483552" exists ...
	I0120 18:10:44.392922  305308 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	W0120 18:10:44.393224  305308 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0120 18:10:44.393413  305308 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0120 18:10:44.393426  305308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0120 18:10:44.393487  305308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-483552
	I0120 18:10:44.403031  305308 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-483552"
	I0120 18:10:44.403072  305308 host.go:66] Checking if "addons-483552" exists ...
	I0120 18:10:44.428114  305308 cli_runner.go:164] Run: docker container inspect addons-483552 --format={{.State.Status}}
	I0120 18:10:44.432782  305308 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 18:10:44.433083  305308 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 18:10:44.433107  305308 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 18:10:44.433168  305308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-483552
	I0120 18:10:44.439299  305308 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.36.0
	I0120 18:10:44.439970  305308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/addons-483552/id_rsa Username:docker}
	I0120 18:10:44.442417  305308 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 18:10:44.442439  305308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 18:10:44.442514  305308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-483552
	I0120 18:10:44.458699  305308 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0120 18:10:44.461704  305308 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0120 18:10:44.466869  305308 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0120 18:10:44.473868  305308 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0120 18:10:44.473905  305308 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0120 18:10:44.473962  305308 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0120 18:10:44.474027  305308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-483552
	I0120 18:10:44.478114  305308 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0120 18:10:44.478135  305308 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0120 18:10:44.478212  305308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-483552
	I0120 18:10:44.491488  305308 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0120 18:10:44.499669  305308 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0120 18:10:44.502521  305308 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0120 18:10:44.509065  305308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/addons-483552/id_rsa Username:docker}
	I0120 18:10:44.516890  305308 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0120 18:10:44.519874  305308 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0120 18:10:44.519938  305308 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0120 18:10:44.520021  305308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-483552
	I0120 18:10:44.581380  305308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/addons-483552/id_rsa Username:docker}
	I0120 18:10:44.586200  305308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/addons-483552/id_rsa Username:docker}
	I0120 18:10:44.591978  305308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/addons-483552/id_rsa Username:docker}
	I0120 18:10:44.603636  305308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/addons-483552/id_rsa Username:docker}
	I0120 18:10:44.623688  305308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/addons-483552/id_rsa Username:docker}
	I0120 18:10:44.625863  305308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/addons-483552/id_rsa Username:docker}
	I0120 18:10:44.642649  305308 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0120 18:10:44.650083  305308 out.go:177]   - Using image docker.io/busybox:stable
	I0120 18:10:44.656726  305308 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0120 18:10:44.656752  305308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0120 18:10:44.656832  305308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-483552
	I0120 18:10:44.671974  305308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/addons-483552/id_rsa Username:docker}
	I0120 18:10:44.675447  305308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/addons-483552/id_rsa Username:docker}
	I0120 18:10:44.689961  305308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/addons-483552/id_rsa Username:docker}
	I0120 18:10:44.706016  305308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/addons-483552/id_rsa Username:docker}
	I0120 18:10:44.706402  305308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/addons-483552/id_rsa Username:docker}
	I0120 18:10:44.740957  305308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/addons-483552/id_rsa Username:docker}
	I0120 18:10:44.895913  305308 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 18:10:44.979324  305308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0120 18:10:45.001029  305308 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 18:10:45.001054  305308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0120 18:10:45.108413  305308 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0120 18:10:45.108508  305308 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0120 18:10:45.115644  305308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0120 18:10:45.152158  305308 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0120 18:10:45.152256  305308 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0120 18:10:45.182729  305308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0120 18:10:45.198806  305308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0120 18:10:45.210360  305308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0120 18:10:45.214523  305308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0120 18:10:45.249027  305308 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0120 18:10:45.249117  305308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0120 18:10:45.276049  305308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 18:10:45.281622  305308 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 18:10:45.281838  305308 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 18:10:45.286960  305308 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0120 18:10:45.287121  305308 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0120 18:10:45.289260  305308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 18:10:45.298672  305308 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0120 18:10:45.298756  305308 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0120 18:10:45.340288  305308 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0120 18:10:45.340407  305308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0120 18:10:45.380815  305308 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0120 18:10:45.380903  305308 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0120 18:10:45.407685  305308 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0120 18:10:45.407768  305308 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0120 18:10:45.420141  305308 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 18:10:45.420225  305308 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 18:10:45.460994  305308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0120 18:10:45.471646  305308 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0120 18:10:45.471721  305308 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0120 18:10:45.560907  305308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0120 18:10:45.571093  305308 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0120 18:10:45.571173  305308 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0120 18:10:45.581047  305308 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0120 18:10:45.581129  305308 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0120 18:10:45.620518  305308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 18:10:45.706147  305308 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0120 18:10:45.706230  305308 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0120 18:10:45.742982  305308 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0120 18:10:45.743066  305308 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0120 18:10:45.780778  305308 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0120 18:10:45.780857  305308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0120 18:10:45.934069  305308 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0120 18:10:45.934151  305308 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0120 18:10:45.944767  305308 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0120 18:10:45.944841  305308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0120 18:10:46.030815  305308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0120 18:10:46.088462  305308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0120 18:10:46.101024  305308 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0120 18:10:46.101105  305308 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0120 18:10:46.252358  305308 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0120 18:10:46.252449  305308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0120 18:10:46.346829  305308 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0120 18:10:46.346911  305308 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0120 18:10:46.466969  305308 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0120 18:10:46.467044  305308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0120 18:10:46.611562  305308 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.252190801s)
	I0120 18:10:46.611589  305308 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0120 18:10:46.612685  305308 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.716742298s)
	I0120 18:10:46.613410  305308 node_ready.go:35] waiting up to 6m0s for node "addons-483552" to be "Ready" ...
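node_ready.go now polls the node object until its Ready condition reports True (the "Ready":"False" lines below are those polls). The same gate can be expressed by hand with kubectl (a sketch; context and node name are from this run, and the 6m timeout mirrors the wait budget above):

	# Sketch: block until the node's Ready condition is True, as node_ready.go does.
	kubectl --context addons-483552 wait --for=condition=Ready \
	  node/addons-483552 --timeout=6m0s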
	I0120 18:10:46.636212  305308 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0120 18:10:46.636285  305308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0120 18:10:46.854688  305308 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0120 18:10:46.854765  305308 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0120 18:10:47.056853  305308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0120 18:10:47.328396  305308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.349032315s)
	I0120 18:10:47.328489  305308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.212819485s)
	I0120 18:10:47.517372  305308 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-483552" context rescaled to 1 replicas
	I0120 18:10:48.691818  305308 node_ready.go:53] node "addons-483552" has status "Ready":"False"
	I0120 18:10:48.861208  305308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.67837613s)
	I0120 18:10:49.273999  305308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.075104622s)
	I0120 18:10:49.274310  305308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.063826624s)
	I0120 18:10:50.137692  305308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.923075105s)
	I0120 18:10:50.137776  305308 addons.go:479] Verifying addon ingress=true in "addons-483552"
	I0120 18:10:50.137940  305308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.861819622s)
	I0120 18:10:50.138084  305308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.848736363s)
	I0120 18:10:50.138192  305308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.6771201s)
	I0120 18:10:50.138291  305308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.517698045s)
	I0120 18:10:50.138598  305308 addons.go:479] Verifying addon metrics-server=true in "addons-483552"
	I0120 18:10:50.138325  305308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.107437154s)
	I0120 18:10:50.138351  305308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.577241188s)
	I0120 18:10:50.138875  305308 addons.go:479] Verifying addon registry=true in "addons-483552"
	I0120 18:10:50.142091  305308 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-483552 service yakd-dashboard -n yakd-dashboard
	
	I0120 18:10:50.142144  305308 out.go:177] * Verifying ingress addon...
	I0120 18:10:50.142111  305308 out.go:177] * Verifying registry addon...
	I0120 18:10:50.146785  305308 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0120 18:10:50.147701  305308 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0120 18:10:50.173398  305308 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0120 18:10:50.173437  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:10:50.175157  305308 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0120 18:10:50.175190  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
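kapi.go's waits above select pods by label and poll until they leave Pending and reach Running. Roughly the same check with kubectl (a sketch: label selectors and namespaces are copied from the log; condition=Ready is a slightly stricter stand-in for the Running check, and the 5m timeout is illustrative):

	# Sketch: wait on the same label selectors that kapi.go is polling.
	kubectl --context addons-483552 -n kube-system wait --for=condition=Ready \
	  pod -l kubernetes.io/minikube-addons=registry --timeout=5m
	kubectl --context addons-483552 -n ingress-nginx wait --for=condition=Ready \
	  pod -l app.kubernetes.io/name=ingress-nginx --timeout=5m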
	I0120 18:10:50.304120  305308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.215565513s)
	W0120 18:10:50.304160  305308 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0120 18:10:50.304191  305308 retry.go:31] will retry after 138.730848ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0120 18:10:50.443364  305308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
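The failure above is a CRD ordering race: the volumesnapshot CRDs and a VolumeSnapshotClass object were sent in a single apply, and the API server had not yet registered the new kind when the object arrived, hence "no matches for kind ... ensure CRDs are installed first". minikube simply retries (here with apply --force), which succeeds once discovery catches up, as the completion at 18:10:53 below shows. An alternative, done by hand, is to split the apply and wait for the CRDs to be Established first (a sketch; the file paths are the addon manifests from the log, and the 60s timeout is illustrative):

	# Sketch: sidestep the race by applying CRDs first and waiting for registration.
	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	# Now the custom resource and the controller can be applied safely.
	kubectl apply \
	  -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	  -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	  -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml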
	I0120 18:10:50.620930  305308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.564009549s)
	I0120 18:10:50.621025  305308 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-483552"
	I0120 18:10:50.625584  305308 out.go:177] * Verifying csi-hostpath-driver addon...
	I0120 18:10:50.629428  305308 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0120 18:10:50.654799  305308 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0120 18:10:50.654873  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:10:50.681730  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:10:50.682443  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:10:51.117049  305308 node_ready.go:53] node "addons-483552" has status "Ready":"False"
	I0120 18:10:51.133688  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:10:51.150224  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:10:51.152877  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:10:51.633527  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:10:51.651209  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:10:51.651780  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:10:52.133307  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:10:52.149672  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:10:52.151280  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:10:52.327183  305308 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0120 18:10:52.327278  305308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-483552
	I0120 18:10:52.343980  305308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/addons-483552/id_rsa Username:docker}
	I0120 18:10:52.443325  305308 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0120 18:10:52.460991  305308 addons.go:238] Setting addon gcp-auth=true in "addons-483552"
	I0120 18:10:52.461043  305308 host.go:66] Checking if "addons-483552" exists ...
	I0120 18:10:52.461523  305308 cli_runner.go:164] Run: docker container inspect addons-483552 --format={{.State.Status}}
	I0120 18:10:52.479558  305308 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0120 18:10:52.479617  305308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-483552
	I0120 18:10:52.496272  305308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/addons-483552/id_rsa Username:docker}
	I0120 18:10:52.633889  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:10:52.650803  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:10:52.653176  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:10:53.118412  305308 node_ready.go:53] node "addons-483552" has status "Ready":"False"
	I0120 18:10:53.133203  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:10:53.153128  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:10:53.153526  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:10:53.198984  305308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.755519988s)
	I0120 18:10:53.201863  305308 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0120 18:10:53.204766  305308 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0120 18:10:53.207570  305308 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0120 18:10:53.207596  305308 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0120 18:10:53.226175  305308 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0120 18:10:53.226243  305308 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0120 18:10:53.244499  305308 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0120 18:10:53.244527  305308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0120 18:10:53.262004  305308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0120 18:10:53.639824  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:10:53.708275  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:10:53.714204  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:10:53.780194  305308 addons.go:479] Verifying addon gcp-auth=true in "addons-483552"
	I0120 18:10:53.783334  305308 out.go:177] * Verifying gcp-auth addon...
	I0120 18:10:53.787074  305308 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0120 18:10:53.803839  305308 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0120 18:10:53.803906  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:10:54.134275  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:10:54.150738  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:10:54.151554  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:10:54.291158  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:10:54.633087  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:10:54.650869  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:10:54.652016  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:10:54.790255  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:10:55.134210  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:10:55.151468  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:10:55.152484  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:10:55.291016  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:10:55.621449  305308 node_ready.go:53] node "addons-483552" has status "Ready":"False"
	I0120 18:10:55.633273  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:10:55.650635  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:10:55.651686  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:10:55.791393  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:10:56.133120  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:10:56.152561  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:10:56.153511  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:10:56.291019  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:10:56.633178  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:10:56.651010  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:10:56.651954  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:10:56.790299  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:10:57.133283  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:10:57.151110  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:10:57.151730  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:10:57.290763  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:10:57.633571  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:10:57.650970  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:10:57.651955  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:10:57.790643  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:10:58.117258  305308 node_ready.go:53] node "addons-483552" has status "Ready":"False"
	I0120 18:10:58.134065  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:10:58.151344  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:10:58.152074  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:10:58.290299  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:10:58.633207  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:10:58.650958  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:10:58.652986  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:10:58.790162  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:10:59.133507  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:10:59.151542  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:10:59.151764  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:10:59.291278  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:10:59.632501  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:10:59.650778  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:10:59.651348  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:10:59.791501  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:00.130182  305308 node_ready.go:53] node "addons-483552" has status "Ready":"False"
	I0120 18:11:00.138461  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:00.161701  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:00.166228  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:00.295187  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:00.633317  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:00.650820  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:00.651831  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:00.790801  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:01.132825  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:01.150720  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:01.153569  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:01.292495  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:01.633290  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:01.650969  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:01.651583  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:01.791317  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:02.133757  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:02.151005  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:02.152746  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:02.290972  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:02.618460  305308 node_ready.go:53] node "addons-483552" has status "Ready":"False"
	I0120 18:11:02.632715  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:02.650084  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:02.651771  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:02.791005  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:03.132822  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:03.150781  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:03.151789  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:03.291004  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:03.633087  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:03.650306  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:03.652498  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:03.791039  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:04.133959  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:04.152086  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:04.153252  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:04.290130  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:04.621317  305308 node_ready.go:53] node "addons-483552" has status "Ready":"False"
	I0120 18:11:04.633457  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:04.650684  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:04.651449  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:04.790986  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:05.133086  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:05.151858  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:05.152863  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:05.291346  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:05.633535  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:05.649874  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:05.651166  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:05.790893  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:06.133167  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:06.150845  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:06.151693  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:06.291637  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:06.634193  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:06.651004  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:06.651815  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:06.790261  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:07.117239  305308 node_ready.go:53] node "addons-483552" has status "Ready":"False"
	I0120 18:11:07.133405  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:07.152537  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:07.152899  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:07.291072  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:07.633458  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:07.651351  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:07.652466  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:07.790983  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:08.133891  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:08.151835  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:08.151925  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:08.291445  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:08.633387  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:08.650467  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:08.651850  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:08.791224  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:09.117316  305308 node_ready.go:53] node "addons-483552" has status "Ready":"False"
	I0120 18:11:09.133151  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:09.151104  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:09.152469  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:09.290874  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:09.633873  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:09.650235  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:09.651643  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:09.791293  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:10.133262  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:10.150035  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:10.151826  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:10.290362  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:10.633166  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:10.651032  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:10.651519  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:10.790321  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:11.117527  305308 node_ready.go:53] node "addons-483552" has status "Ready":"False"
	I0120 18:11:11.133577  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:11.151343  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:11.151975  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:11.291930  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:11.633683  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:11.650202  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:11.651946  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:11.790802  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:12.132841  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:12.150968  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:12.151501  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:12.290283  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:12.633401  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:12.650056  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:12.651481  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:12.795265  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:13.133152  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:13.151700  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:13.151893  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:13.290879  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:13.618446  305308 node_ready.go:53] node "addons-483552" has status "Ready":"False"
	I0120 18:11:13.632737  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:13.651016  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:13.652129  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:13.790643  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:14.133388  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:14.152543  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:14.153005  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:14.290859  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:14.633465  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:14.650169  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:14.651870  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:14.790279  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:15.133280  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:15.151045  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:15.151654  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:15.290173  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:15.632903  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:15.650037  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:15.651582  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:15.791107  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:16.117110  305308 node_ready.go:53] node "addons-483552" has status "Ready":"False"
	I0120 18:11:16.133578  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:16.151790  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:16.152562  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:16.290920  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:16.632963  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:16.651010  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:16.651842  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:16.791347  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:17.133532  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:17.150853  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:17.152080  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:17.290893  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:17.633185  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:17.650337  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:17.652286  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:17.790907  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:18.133352  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:18.151642  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:18.152214  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:18.291125  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:18.617448  305308 node_ready.go:53] node "addons-483552" has status "Ready":"False"
	I0120 18:11:18.632649  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:18.651966  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:18.652549  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:18.790218  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:19.133657  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:19.151794  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:19.152473  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:19.290604  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:19.633886  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:19.651258  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:19.651975  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:19.790134  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:20.133461  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:20.149775  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:20.151713  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:20.291105  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:20.619898  305308 node_ready.go:53] node "addons-483552" has status "Ready":"False"
	I0120 18:11:20.633243  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:20.651500  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:20.652334  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:20.790870  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:21.133134  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:21.152460  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:21.152732  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:21.291491  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:21.633341  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:21.651789  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:21.652072  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:21.791029  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:22.133922  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:22.150548  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:22.151316  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:22.291003  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:22.621805  305308 node_ready.go:53] node "addons-483552" has status "Ready":"False"
	I0120 18:11:22.633154  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:22.651044  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:22.652056  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:22.790626  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:23.133680  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:23.151462  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:23.152249  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:23.290930  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:23.633775  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:23.650409  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:23.652325  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:23.790843  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:24.133979  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:24.152931  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:24.153292  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:24.290613  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:24.633112  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:24.651490  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:24.651815  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:24.791404  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:25.117257  305308 node_ready.go:53] node "addons-483552" has status "Ready":"False"
	I0120 18:11:25.134219  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:25.150525  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:25.152596  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:25.291250  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:25.633818  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:25.650572  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:25.652363  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:25.790850  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:26.117485  305308 node_ready.go:49] node "addons-483552" has status "Ready":"True"
	I0120 18:11:26.117513  305308 node_ready.go:38] duration metric: took 39.504084024s for node "addons-483552" to be "Ready" ...
	I0120 18:11:26.117523  305308 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
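The node_ready.go lines above are a poll on the node's Ready condition: the same check repeats until the NodeReady condition flips from "False" to "True" (here after 39.5s). A minimal sketch of that kind of check, assuming a standard client-go clientset; the helper name waitNodeReady is hypothetical and this is illustrative, not minikube's actual node_ready.go:

```go
// Hypothetical sketch of the node-readiness poll logged above (assumption:
// standard client-go; not minikube's real implementation).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // node reports "Ready":"True", as at 18:11:26 above
				}
			}
		}
		time.Sleep(2 * time.Second) // roughly the cadence visible in the log timestamps
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "addons-483552", 4*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
```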
	I0120 18:11:26.142154  305308 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-7pl9p" in "kube-system" namespace to be "Ready" ...
	I0120 18:11:26.167328  305308 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0120 18:11:26.167405  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:26.280172  305308 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0120 18:11:26.280246  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:26.281848  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:26.328267  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
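The kapi.go:86/96 pairs just above show the other wait in flight: pods are listed by label selector (e.g. kubernetes.io/minikube-addons=registry), each pod's phase is reported, and the list is re-polled until nothing is left Pending. A sketch of such a selector-based wait under the same client-go assumption; waitPodsBySelector is a made-up name, though ListOptions.LabelSelector is the real field:

```go
// Illustrative selector-based wait like the kapi.go lines above: list pods
// matching a selector and poll until all are Running. Assumes client-go.
package kapiwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitPodsBySelector(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allRunning := len(pods.Items) > 0
		for _, p := range pods.Items {
			fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			if p.Status.Phase != corev1.PodRunning {
				allRunning = false
			}
		}
		if allRunning {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // the log above ticks at roughly this rate
	}
	return fmt.Errorf("pods for selector %q not Running within %v", selector, timeout)
}
```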
	I0120 18:11:26.644251  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:26.747422  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:26.747956  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:26.858954  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:27.135143  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:27.152400  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:27.152674  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:27.291273  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:27.634858  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:27.650502  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:27.652550  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:27.792087  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:28.135062  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:28.149172  305308 pod_ready.go:93] pod "coredns-668d6bf9bc-7pl9p" in "kube-system" namespace has status "Ready":"True"
	I0120 18:11:28.149250  305308 pod_ready.go:82] duration metric: took 2.00704871s for pod "coredns-668d6bf9bc-7pl9p" in "kube-system" namespace to be "Ready" ...
	I0120 18:11:28.149290  305308 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-483552" in "kube-system" namespace to be "Ready" ...
	I0120 18:11:28.152277  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:28.153303  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:28.156674  305308 pod_ready.go:93] pod "etcd-addons-483552" in "kube-system" namespace has status "Ready":"True"
	I0120 18:11:28.156698  305308 pod_ready.go:82] duration metric: took 7.373533ms for pod "etcd-addons-483552" in "kube-system" namespace to be "Ready" ...
	I0120 18:11:28.156713  305308 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-483552" in "kube-system" namespace to be "Ready" ...
	I0120 18:11:28.161907  305308 pod_ready.go:93] pod "kube-apiserver-addons-483552" in "kube-system" namespace has status "Ready":"True"
	I0120 18:11:28.161933  305308 pod_ready.go:82] duration metric: took 5.212122ms for pod "kube-apiserver-addons-483552" in "kube-system" namespace to be "Ready" ...
	I0120 18:11:28.161944  305308 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-483552" in "kube-system" namespace to be "Ready" ...
	I0120 18:11:28.168698  305308 pod_ready.go:93] pod "kube-controller-manager-addons-483552" in "kube-system" namespace has status "Ready":"True"
	I0120 18:11:28.168726  305308 pod_ready.go:82] duration metric: took 6.773668ms for pod "kube-controller-manager-addons-483552" in "kube-system" namespace to be "Ready" ...
	I0120 18:11:28.168742  305308 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rj7jn" in "kube-system" namespace to be "Ready" ...
	I0120 18:11:28.174054  305308 pod_ready.go:93] pod "kube-proxy-rj7jn" in "kube-system" namespace has status "Ready":"True"
	I0120 18:11:28.174080  305308 pod_ready.go:82] duration metric: took 5.329787ms for pod "kube-proxy-rj7jn" in "kube-system" namespace to be "Ready" ...
	I0120 18:11:28.174092  305308 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-483552" in "kube-system" namespace to be "Ready" ...
	I0120 18:11:28.290240  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:28.546978  305308 pod_ready.go:93] pod "kube-scheduler-addons-483552" in "kube-system" namespace has status "Ready":"True"
	I0120 18:11:28.547059  305308 pod_ready.go:82] duration metric: took 372.957814ms for pod "kube-scheduler-addons-483552" in "kube-system" namespace to be "Ready" ...
	I0120 18:11:28.547087  305308 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-l78fs" in "kube-system" namespace to be "Ready" ...
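Each pod_ready transition above ("Ready":"False", then "True" with a duration metric) is a read of the pod's PodReady status condition. A small illustrative helper for that check, again assuming client-go; isPodReady is a hypothetical name, not minikube's:

```go
// Reading the PodReady condition, which is what the pod_ready.go
// "Ready":"True"/"False" lines report. Assumes client-go; illustrative only.
package podready

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			// "Ready":"True" in the log corresponds to ConditionTrue here.
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil // condition not posted yet (pod still Pending)
}
```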
	I0120 18:11:28.633996  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:28.652147  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:28.652857  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:28.794364  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:29.135528  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:29.152278  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:29.153535  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:29.291166  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:29.634357  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:29.652863  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:29.654219  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:29.791335  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:30.135986  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:30.159243  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:30.162013  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:30.291832  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:30.560098  305308 pod_ready.go:103] pod "metrics-server-7fbb699795-l78fs" in "kube-system" namespace has status "Ready":"False"
	I0120 18:11:30.635074  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:30.652286  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:30.654753  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:30.791855  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:31.134815  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:31.154408  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:31.154572  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:31.291221  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:31.634472  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:31.653362  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:31.654500  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:31.790975  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:32.137291  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:32.152166  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:32.155688  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:32.292400  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:32.635525  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:32.652802  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:32.655235  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:32.790907  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:33.054699  305308 pod_ready.go:103] pod "metrics-server-7fbb699795-l78fs" in "kube-system" namespace has status "Ready":"False"
	I0120 18:11:33.136480  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:33.153223  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:33.153728  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:33.291433  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:33.635699  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:33.653552  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:33.655564  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:33.791409  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:34.136722  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:34.153608  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:34.154925  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:34.291047  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:34.634926  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:34.651007  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:34.653102  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:34.791843  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:35.135589  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:35.150705  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:35.153839  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:35.291143  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:35.555474  305308 pod_ready.go:103] pod "metrics-server-7fbb699795-l78fs" in "kube-system" namespace has status "Ready":"False"
	I0120 18:11:35.636724  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:35.657360  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:35.658946  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:35.791126  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:36.137893  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:36.156129  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:36.157748  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:36.291213  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:36.635161  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:36.653083  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:36.656830  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:36.791847  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:37.135346  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:37.153198  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:37.153424  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:37.290806  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:37.638088  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:37.653299  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:37.654768  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:37.791632  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:38.055406  305308 pod_ready.go:103] pod "metrics-server-7fbb699795-l78fs" in "kube-system" namespace has status "Ready":"False"
	I0120 18:11:38.136114  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:38.159032  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:38.163010  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:38.290994  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:38.634259  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:38.651419  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:38.652430  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:38.791172  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:39.134607  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:39.151516  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:39.152622  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:39.290898  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:39.635133  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:39.651120  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:39.652566  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:39.790981  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:40.143424  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:40.153308  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:40.153990  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:40.290806  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:40.554162  305308 pod_ready.go:103] pod "metrics-server-7fbb699795-l78fs" in "kube-system" namespace has status "Ready":"False"
	I0120 18:11:40.634867  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:40.651000  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:40.653663  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:40.791066  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:41.134427  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:41.152127  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:41.152724  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:41.290850  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:41.633696  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:41.661128  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:41.662395  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:41.790975  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:42.134552  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:42.154860  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:42.155561  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:42.292002  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:42.641613  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:42.662444  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:42.665296  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:42.790813  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:43.055961  305308 pod_ready.go:103] pod "metrics-server-7fbb699795-l78fs" in "kube-system" namespace has status "Ready":"False"
	I0120 18:11:43.139128  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:43.154227  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:43.155108  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:43.290570  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:43.634300  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:43.657337  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:43.659705  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:43.791456  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:44.134432  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:44.151255  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:44.152888  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:44.291293  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:44.637145  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:44.654985  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:44.655865  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:44.791489  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:45.056621  305308 pod_ready.go:103] pod "metrics-server-7fbb699795-l78fs" in "kube-system" namespace has status "Ready":"False"
	I0120 18:11:45.138445  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:45.156686  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:45.158262  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:45.291252  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:45.634970  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:45.657213  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:45.658365  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:45.791916  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:46.136695  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:46.152811  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:46.153837  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:46.291816  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:46.677805  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:46.680107  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:46.680999  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:46.792410  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:47.135026  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:47.150720  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:47.154801  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:47.291938  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:47.560626  305308 pod_ready.go:103] pod "metrics-server-7fbb699795-l78fs" in "kube-system" namespace has status "Ready":"False"
	I0120 18:11:47.642673  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:47.742846  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:47.744395  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:47.791146  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:48.136166  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:48.154076  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:48.156869  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:48.291610  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:48.635469  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:48.654262  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:48.655188  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:48.790887  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:49.135418  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:49.155457  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:49.156545  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:49.292921  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:49.568458  305308 pod_ready.go:103] pod "metrics-server-7fbb699795-l78fs" in "kube-system" namespace has status "Ready":"False"
	I0120 18:11:49.643229  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:49.691442  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:49.691705  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:49.831793  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:50.055662  305308 pod_ready.go:93] pod "metrics-server-7fbb699795-l78fs" in "kube-system" namespace has status "Ready":"True"
	I0120 18:11:50.055700  305308 pod_ready.go:82] duration metric: took 21.508598751s for pod "metrics-server-7fbb699795-l78fs" in "kube-system" namespace to be "Ready" ...
	I0120 18:11:50.055712  305308 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-sbfpn" in "kube-system" namespace to be "Ready" ...
	I0120 18:11:50.142921  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:50.155644  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:50.161525  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:50.290858  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:50.640142  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:50.666468  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:50.667802  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:50.790740  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:51.134904  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:51.167453  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:51.168810  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:51.291465  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:51.638382  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:51.665188  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:51.666794  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:51.792594  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:52.062654  305308 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sbfpn" in "kube-system" namespace has status "Ready":"False"
	I0120 18:11:52.134543  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:52.152851  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:52.154257  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:52.291133  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:52.636240  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:52.656371  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:52.656952  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:52.790976  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:53.135890  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:53.150521  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:53.152543  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:53.291097  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:53.634703  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:53.651420  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:53.652414  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:53.792172  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:54.072799  305308 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sbfpn" in "kube-system" namespace has status "Ready":"False"
	I0120 18:11:54.134978  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:54.151696  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:54.153925  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:54.290849  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:54.638760  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:54.661430  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:54.663156  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:54.791109  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:55.135644  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:55.152132  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:55.152384  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:55.290951  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:55.635097  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:55.650239  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:55.651624  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:55.791854  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:56.136237  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:56.153622  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:56.155404  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:56.290838  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:56.561872  305308 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sbfpn" in "kube-system" namespace has status "Ready":"False"
	I0120 18:11:56.634634  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:56.651682  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:56.652062  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:56.791628  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:57.134320  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:57.151526  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:57.152497  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:57.291181  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:57.635233  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:57.651137  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:57.654105  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:57.793093  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:58.136263  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:58.152263  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:58.153300  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:58.291072  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:58.568169  305308 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sbfpn" in "kube-system" namespace has status "Ready":"False"
	I0120 18:11:58.635016  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:58.652984  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:58.653887  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:58.791838  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:59.134800  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:59.150284  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:59.152672  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:59.301090  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:11:59.634680  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:11:59.652775  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:11:59.654363  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:11:59.791120  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:00.148503  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:00.156571  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:12:00.160766  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:00.294407  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:00.634790  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:00.651879  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:12:00.655418  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:00.798482  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:01.063528  305308 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sbfpn" in "kube-system" namespace has status "Ready":"False"
	I0120 18:12:01.135688  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:01.158234  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:01.162242  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:12:01.294462  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:01.635451  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:01.657970  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:12:01.720402  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:01.791704  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:02.135668  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:02.150651  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:12:02.152445  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:02.290681  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:02.636625  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:02.651831  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:02.652565  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:12:02.791417  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:03.133965  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:03.151790  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:12:03.152101  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:03.291287  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:03.562978  305308 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sbfpn" in "kube-system" namespace has status "Ready":"False"
	I0120 18:12:03.634197  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:03.650944  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:12:03.652867  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:03.791404  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:04.135983  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:04.156784  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:12:04.157973  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:04.291172  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:04.637067  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:04.654962  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:12:04.656403  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:04.790605  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:05.140552  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:05.158937  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:12:05.160393  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:05.291460  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:05.635952  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:05.652953  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:12:05.654450  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:05.792356  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:06.062071  305308 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sbfpn" in "kube-system" namespace has status "Ready":"False"
	I0120 18:12:06.134129  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:06.150890  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:12:06.158065  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:06.290479  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:06.634655  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:06.652375  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:06.653034  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 18:12:06.791090  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:07.134153  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:07.150799  305308 kapi.go:107] duration metric: took 1m17.004013203s to wait for kubernetes.io/minikube-addons=registry ...
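(The waits above poll pods by label selector until they report Ready. Outside the test harness, a roughly equivalent manual check is sketched below; the kube-system namespace for the registry addon pods is an assumption, not something this log states.)

  # sketch: replicate the label-based readiness wait the harness performs
  # (namespace kube-system is an assumption for illustration)
  kubectl --context addons-483552 -n kube-system wait pod \
    -l kubernetes.io/minikube-addons=registry \
    --for=condition=Ready --timeout=120s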
	I0120 18:12:07.153079  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:07.290415  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:07.634937  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:07.652471  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:07.791940  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:08.063855  305308 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sbfpn" in "kube-system" namespace has status "Ready":"False"
	I0120 18:12:08.135438  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:08.152622  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:08.291525  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:08.635846  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:08.652505  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:08.791769  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:09.135537  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:09.153253  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:09.290819  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:09.635984  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:09.652479  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:09.791413  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:10.064079  305308 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sbfpn" in "kube-system" namespace has status "Ready":"False"
	I0120 18:12:10.135561  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:10.153101  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:10.290550  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:10.636555  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:10.654181  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:10.790925  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:11.175133  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:11.176347  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:11.294222  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:11.660165  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:11.661484  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:11.791464  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:12.064819  305308 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sbfpn" in "kube-system" namespace has status "Ready":"False"
	I0120 18:12:12.134689  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:12.153102  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:12.290944  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:12.640074  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:12.659453  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:12.791382  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:13.135591  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:13.153850  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:13.291344  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:13.636620  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:13.652288  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:13.793242  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:14.136203  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:14.153011  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:14.290788  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:14.590727  305308 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sbfpn" in "kube-system" namespace has status "Ready":"False"
	I0120 18:12:14.657266  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:14.672044  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:14.791488  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:15.152627  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:15.155368  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:15.291038  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:15.634651  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:15.651849  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:15.791276  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:16.134878  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:16.152436  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:16.291079  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:16.636423  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:16.653421  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:16.790914  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:17.063799  305308 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sbfpn" in "kube-system" namespace has status "Ready":"False"
	I0120 18:12:17.141397  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:17.154114  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:17.291153  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:17.634687  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:17.652082  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:17.799997  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:18.063765  305308 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-sbfpn" in "kube-system" namespace has status "Ready":"True"
	I0120 18:12:18.063797  305308 pod_ready.go:82] duration metric: took 28.008073939s for pod "nvidia-device-plugin-daemonset-sbfpn" in "kube-system" namespace to be "Ready" ...
	I0120 18:12:18.063835  305308 pod_ready.go:39] duration metric: took 51.946299242s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 18:12:18.063869  305308 api_server.go:52] waiting for apiserver process to appear ...
	I0120 18:12:18.063948  305308 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 18:12:18.064044  305308 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 18:12:18.110456  305308 cri.go:89] found id: "1981feda8abc376061360f7a5bb875c16177c8ce626b528bdfa8f0896cd5c462"
	I0120 18:12:18.110481  305308 cri.go:89] found id: ""
	I0120 18:12:18.110489  305308 logs.go:282] 1 containers: [1981feda8abc376061360f7a5bb875c16177c8ce626b528bdfa8f0896cd5c462]
	I0120 18:12:18.110548  305308 ssh_runner.go:195] Run: which crictl
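(Each container lookup above shells into the node and queries CRI-O through crictl. A sketch of reproducing one lookup by hand, composed from the crictl invocations shown in this log; the minikube ssh wrapper and the <container-id> placeholder are illustrative, not taken verbatim from the log.)

  # sketch: find the kube-apiserver container ID via CRI-O, as the log does
  minikube -p addons-483552 ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
  # then tail its logs, substituting the ID the previous command printed
  minikube -p addons-483552 ssh -- sudo crictl logs --tail 400 <container-id>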
	I0120 18:12:18.115508  305308 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 18:12:18.115613  305308 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 18:12:18.135308  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:18.153120  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:18.172040  305308 cri.go:89] found id: "c0af55ef798bb002b69ed249f0155715811e3a521d1d62396b58ce4515fd5107"
	I0120 18:12:18.172066  305308 cri.go:89] found id: ""
	I0120 18:12:18.172083  305308 logs.go:282] 1 containers: [c0af55ef798bb002b69ed249f0155715811e3a521d1d62396b58ce4515fd5107]
	I0120 18:12:18.172138  305308 ssh_runner.go:195] Run: which crictl
	I0120 18:12:18.175810  305308 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 18:12:18.175890  305308 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 18:12:18.216948  305308 cri.go:89] found id: "2a0aba411e96a5025cf1e9b00a8974fff3960bb7af97638f03477ca8f8ca4c17"
	I0120 18:12:18.217023  305308 cri.go:89] found id: ""
	I0120 18:12:18.217046  305308 logs.go:282] 1 containers: [2a0aba411e96a5025cf1e9b00a8974fff3960bb7af97638f03477ca8f8ca4c17]
	I0120 18:12:18.217132  305308 ssh_runner.go:195] Run: which crictl
	I0120 18:12:18.220434  305308 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 18:12:18.220512  305308 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 18:12:18.262806  305308 cri.go:89] found id: "6ad3359fc83c313a2fbd8962d6bdc4afaad6c42b8a7e9bcd8b4e40daada7782e"
	I0120 18:12:18.262884  305308 cri.go:89] found id: ""
	I0120 18:12:18.262908  305308 logs.go:282] 1 containers: [6ad3359fc83c313a2fbd8962d6bdc4afaad6c42b8a7e9bcd8b4e40daada7782e]
	I0120 18:12:18.263000  305308 ssh_runner.go:195] Run: which crictl
	I0120 18:12:18.266440  305308 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 18:12:18.266553  305308 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 18:12:18.291982  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:18.333176  305308 cri.go:89] found id: "2b7ceafc84246299f3c3a07ff9b34f58bc49e27b440a4b5a27bc9caafaf866dd"
	I0120 18:12:18.333251  305308 cri.go:89] found id: ""
	I0120 18:12:18.333279  305308 logs.go:282] 1 containers: [2b7ceafc84246299f3c3a07ff9b34f58bc49e27b440a4b5a27bc9caafaf866dd]
	I0120 18:12:18.333376  305308 ssh_runner.go:195] Run: which crictl
	I0120 18:12:18.341067  305308 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 18:12:18.341199  305308 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 18:12:18.409180  305308 cri.go:89] found id: "f8ab10cd574bd4293be8244fe853d42dec139f1b151c11461c94975d70b02a2d"
	I0120 18:12:18.409280  305308 cri.go:89] found id: ""
	I0120 18:12:18.409304  305308 logs.go:282] 1 containers: [f8ab10cd574bd4293be8244fe853d42dec139f1b151c11461c94975d70b02a2d]
	I0120 18:12:18.409406  305308 ssh_runner.go:195] Run: which crictl
	I0120 18:12:18.416197  305308 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 18:12:18.416348  305308 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 18:12:18.479664  305308 cri.go:89] found id: "c644cc930f8a05b1d5b4990de8afae19938b2a66dbb40663d4a6acf04d395a47"
	I0120 18:12:18.479741  305308 cri.go:89] found id: ""
	I0120 18:12:18.479763  305308 logs.go:282] 1 containers: [c644cc930f8a05b1d5b4990de8afae19938b2a66dbb40663d4a6acf04d395a47]
	I0120 18:12:18.479850  305308 ssh_runner.go:195] Run: which crictl
	I0120 18:12:18.484097  305308 logs.go:123] Gathering logs for describe nodes ...
	I0120 18:12:18.484183  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0120 18:12:18.634935  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:18.652403  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:18.700166  305308 logs.go:123] Gathering logs for coredns [2a0aba411e96a5025cf1e9b00a8974fff3960bb7af97638f03477ca8f8ca4c17] ...
	I0120 18:12:18.700203  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a0aba411e96a5025cf1e9b00a8974fff3960bb7af97638f03477ca8f8ca4c17"
	I0120 18:12:18.752790  305308 logs.go:123] Gathering logs for kube-scheduler [6ad3359fc83c313a2fbd8962d6bdc4afaad6c42b8a7e9bcd8b4e40daada7782e] ...
	I0120 18:12:18.752915  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ad3359fc83c313a2fbd8962d6bdc4afaad6c42b8a7e9bcd8b4e40daada7782e"
	I0120 18:12:18.792879  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:18.811356  305308 logs.go:123] Gathering logs for kubelet ...
	I0120 18:12:18.811385  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0120 18:12:18.888709  305308 logs.go:138] Found kubelet problem: Jan 20 18:11:25 addons-483552 kubelet[1528]: W0120 18:11:25.931782    1528 reflector.go:569] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-483552" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-483552' and this object
	W0120 18:12:18.889010  305308 logs.go:138] Found kubelet problem: Jan 20 18:11:25 addons-483552 kubelet[1528]: E0120 18:11:25.931833    1528 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-483552\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-483552' and this object" logger="UnhandledError"
	W0120 18:12:18.889599  305308 logs.go:138] Found kubelet problem: Jan 20 18:11:25 addons-483552 kubelet[1528]: W0120 18:11:25.931882    1528 reflector.go:569] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-483552" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-483552' and this object
	W0120 18:12:18.889927  305308 logs.go:138] Found kubelet problem: Jan 20 18:11:25 addons-483552 kubelet[1528]: E0120 18:11:25.931897    1528 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-483552\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-483552' and this object" logger="UnhandledError"
	W0120 18:12:18.890127  305308 logs.go:138] Found kubelet problem: Jan 20 18:11:25 addons-483552 kubelet[1528]: W0120 18:11:25.988491    1528 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-483552" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-483552' and this object
	W0120 18:12:18.890375  305308 logs.go:138] Found kubelet problem: Jan 20 18:11:25 addons-483552 kubelet[1528]: E0120 18:11:25.988543    1528 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-483552\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-483552' and this object" logger="UnhandledError"
	W0120 18:12:18.890571  305308 logs.go:138] Found kubelet problem: Jan 20 18:11:25 addons-483552 kubelet[1528]: W0120 18:11:25.988724    1528 reflector.go:569] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-483552" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-483552' and this object
	W0120 18:12:18.891500  305308 logs.go:138] Found kubelet problem: Jan 20 18:11:25 addons-483552 kubelet[1528]: E0120 18:11:25.988753    1528 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-483552\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-483552' and this object" logger="UnhandledError"
	I0120 18:12:18.937199  305308 logs.go:123] Gathering logs for kube-apiserver [1981feda8abc376061360f7a5bb875c16177c8ce626b528bdfa8f0896cd5c462] ...
	I0120 18:12:18.937247  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1981feda8abc376061360f7a5bb875c16177c8ce626b528bdfa8f0896cd5c462"
	I0120 18:12:19.013443  305308 logs.go:123] Gathering logs for etcd [c0af55ef798bb002b69ed249f0155715811e3a521d1d62396b58ce4515fd5107] ...
	I0120 18:12:19.013491  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0af55ef798bb002b69ed249f0155715811e3a521d1d62396b58ce4515fd5107"
	I0120 18:12:19.086389  305308 logs.go:123] Gathering logs for kube-proxy [2b7ceafc84246299f3c3a07ff9b34f58bc49e27b440a4b5a27bc9caafaf866dd] ...
	I0120 18:12:19.086436  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b7ceafc84246299f3c3a07ff9b34f58bc49e27b440a4b5a27bc9caafaf866dd"
	I0120 18:12:19.126584  305308 logs.go:123] Gathering logs for kube-controller-manager [f8ab10cd574bd4293be8244fe853d42dec139f1b151c11461c94975d70b02a2d] ...
	I0120 18:12:19.126615  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8ab10cd574bd4293be8244fe853d42dec139f1b151c11461c94975d70b02a2d"
	I0120 18:12:19.136300  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:19.153120  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:19.208970  305308 logs.go:123] Gathering logs for kindnet [c644cc930f8a05b1d5b4990de8afae19938b2a66dbb40663d4a6acf04d395a47] ...
	I0120 18:12:19.209009  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c644cc930f8a05b1d5b4990de8afae19938b2a66dbb40663d4a6acf04d395a47"
	I0120 18:12:19.260000  305308 logs.go:123] Gathering logs for CRI-O ...
	I0120 18:12:19.260031  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 18:12:19.293340  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:19.350865  305308 logs.go:123] Gathering logs for container status ...
	I0120 18:12:19.350902  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 18:12:19.406283  305308 logs.go:123] Gathering logs for dmesg ...
	I0120 18:12:19.406315  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
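(The "Gathering logs" phase above is a fixed set of shell commands run on the node. A sketch of collecting the same post-mortem bundle by hand, using the commands exactly as they appear in this log, from a shell inside the node via `minikube -p addons-483552 ssh`:)

  sudo journalctl -u kubelet -n 400        # kubelet log tail
  sudo journalctl -u crio -n 400           # CRI-O log tail
  sudo crictl ps -a                        # container status across all states
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400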
	I0120 18:12:19.425229  305308 out.go:358] Setting ErrFile to fd 2...
	I0120 18:12:19.425298  305308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0120 18:12:19.425380  305308 out.go:270] X Problems detected in kubelet:
	W0120 18:12:19.425420  305308 out.go:270]   Jan 20 18:11:25 addons-483552 kubelet[1528]: E0120 18:11:25.931897    1528 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-483552\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-483552' and this object" logger="UnhandledError"
	W0120 18:12:19.425469  305308 out.go:270]   Jan 20 18:11:25 addons-483552 kubelet[1528]: W0120 18:11:25.988491    1528 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-483552" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-483552' and this object
	W0120 18:12:19.425505  305308 out.go:270]   Jan 20 18:11:25 addons-483552 kubelet[1528]: E0120 18:11:25.988543    1528 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-483552\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-483552' and this object" logger="UnhandledError"
	W0120 18:12:19.425552  305308 out.go:270]   Jan 20 18:11:25 addons-483552 kubelet[1528]: W0120 18:11:25.988724    1528 reflector.go:569] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-483552" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-483552' and this object
	W0120 18:12:19.425585  305308 out.go:270]   Jan 20 18:11:25 addons-483552 kubelet[1528]: E0120 18:11:25.988753    1528 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-483552\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-483552' and this object" logger="UnhandledError"
	I0120 18:12:19.425640  305308 out.go:358] Setting ErrFile to fd 2...
	I0120 18:12:19.425661  305308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 18:12:19.634821  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:19.652142  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:19.791121  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:20.133890  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:20.152972  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:20.291098  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:20.635779  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:20.652595  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:20.792492  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:21.135177  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:21.159246  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:21.292295  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:21.636505  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:21.654101  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:21.792281  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:22.135664  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:22.152651  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:22.290723  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:22.640048  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:22.658063  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:22.795404  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:23.134550  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:23.154514  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:23.292179  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:23.634442  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:23.653721  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:23.791256  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:24.136132  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:24.153095  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:24.291433  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:24.636440  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:24.654336  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:24.791278  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:25.136799  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:25.152992  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:25.291501  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:25.634869  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:25.652355  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:25.790852  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:26.134746  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:26.152987  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:26.294525  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:26.636639  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:26.657933  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:26.792417  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:27.135262  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:27.152961  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:27.290954  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:27.635420  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:27.734923  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:27.834649  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 18:12:28.134997  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:28.152323  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:28.290599  305308 kapi.go:107] duration metric: took 1m34.503524129s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0120 18:12:28.293562  305308 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-483552 cluster.
	I0120 18:12:28.296343  305308 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0120 18:12:28.299435  305308 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
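(The hint above opts a pod out of credential injection via a label with the `gcp-auth-skip-secret` key. A minimal sketch of such a pod follows; the label value "true", the pod name, and the nginx image are assumptions for illustration, not taken from this log.)

  kubectl --context addons-483552 apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: no-gcp-creds               # hypothetical pod name
    labels:
      gcp-auth-skip-secret: "true"   # value "true" is an assumption
  spec:
    containers:
    - name: app
      image: nginx
  EOF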
	I0120 18:12:28.635365  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:28.658002  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:29.135779  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:29.152564  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:29.426803  305308 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 18:12:29.441147  305308 api_server.go:72] duration metric: took 1m45.441852018s to wait for apiserver process to appear ...
	I0120 18:12:29.441171  305308 api_server.go:88] waiting for apiserver healthz status ...
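(The healthz wait starting here polls the apiserver's health endpoint. A manual equivalent is sketched below; kubectl's --raw access to /healthz is standard Kubernetes behavior, not something this log shows.)

  # sketch: probe the apiserver health endpoint directly; prints "ok" when healthy
  kubectl --context addons-483552 get --raw /healthz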
	I0120 18:12:29.441206  305308 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 18:12:29.441262  305308 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 18:12:29.482678  305308 cri.go:89] found id: "1981feda8abc376061360f7a5bb875c16177c8ce626b528bdfa8f0896cd5c462"
	I0120 18:12:29.482755  305308 cri.go:89] found id: ""
	I0120 18:12:29.482779  305308 logs.go:282] 1 containers: [1981feda8abc376061360f7a5bb875c16177c8ce626b528bdfa8f0896cd5c462]
	I0120 18:12:29.482862  305308 ssh_runner.go:195] Run: which crictl
	I0120 18:12:29.486702  305308 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 18:12:29.486822  305308 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 18:12:29.543231  305308 cri.go:89] found id: "c0af55ef798bb002b69ed249f0155715811e3a521d1d62396b58ce4515fd5107"
	I0120 18:12:29.543303  305308 cri.go:89] found id: ""
	I0120 18:12:29.543334  305308 logs.go:282] 1 containers: [c0af55ef798bb002b69ed249f0155715811e3a521d1d62396b58ce4515fd5107]
	I0120 18:12:29.543419  305308 ssh_runner.go:195] Run: which crictl
	I0120 18:12:29.549894  305308 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 18:12:29.549995  305308 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 18:12:29.612062  305308 cri.go:89] found id: "2a0aba411e96a5025cf1e9b00a8974fff3960bb7af97638f03477ca8f8ca4c17"
	I0120 18:12:29.612133  305308 cri.go:89] found id: ""
	I0120 18:12:29.612156  305308 logs.go:282] 1 containers: [2a0aba411e96a5025cf1e9b00a8974fff3960bb7af97638f03477ca8f8ca4c17]
	I0120 18:12:29.612237  305308 ssh_runner.go:195] Run: which crictl
	I0120 18:12:29.625496  305308 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 18:12:29.625619  305308 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 18:12:29.642661  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:29.694677  305308 cri.go:89] found id: "6ad3359fc83c313a2fbd8962d6bdc4afaad6c42b8a7e9bcd8b4e40daada7782e"
	I0120 18:12:29.694698  305308 cri.go:89] found id: ""
	I0120 18:12:29.694706  305308 logs.go:282] 1 containers: [6ad3359fc83c313a2fbd8962d6bdc4afaad6c42b8a7e9bcd8b4e40daada7782e]
	I0120 18:12:29.694766  305308 ssh_runner.go:195] Run: which crictl
	I0120 18:12:29.698138  305308 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 18:12:29.698204  305308 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 18:12:29.736538  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:29.756525  305308 cri.go:89] found id: "2b7ceafc84246299f3c3a07ff9b34f58bc49e27b440a4b5a27bc9caafaf866dd"
	I0120 18:12:29.756598  305308 cri.go:89] found id: ""
	I0120 18:12:29.756633  305308 logs.go:282] 1 containers: [2b7ceafc84246299f3c3a07ff9b34f58bc49e27b440a4b5a27bc9caafaf866dd]
	I0120 18:12:29.756715  305308 ssh_runner.go:195] Run: which crictl
	I0120 18:12:29.760172  305308 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 18:12:29.760289  305308 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 18:12:29.801742  305308 cri.go:89] found id: "f8ab10cd574bd4293be8244fe853d42dec139f1b151c11461c94975d70b02a2d"
	I0120 18:12:29.801834  305308 cri.go:89] found id: ""
	I0120 18:12:29.801859  305308 logs.go:282] 1 containers: [f8ab10cd574bd4293be8244fe853d42dec139f1b151c11461c94975d70b02a2d]
	I0120 18:12:29.801923  305308 ssh_runner.go:195] Run: which crictl
	I0120 18:12:29.805345  305308 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 18:12:29.805414  305308 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 18:12:29.848528  305308 cri.go:89] found id: "c644cc930f8a05b1d5b4990de8afae19938b2a66dbb40663d4a6acf04d395a47"
	I0120 18:12:29.848601  305308 cri.go:89] found id: ""
	I0120 18:12:29.848623  305308 logs.go:282] 1 containers: [c644cc930f8a05b1d5b4990de8afae19938b2a66dbb40663d4a6acf04d395a47]
	I0120 18:12:29.848720  305308 ssh_runner.go:195] Run: which crictl
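[editor's note] Each control-plane container ID above is discovered the same way; reproduced by hand inside the node it is just (a sketch using the exact commands logged above):

	# quiet mode prints only container IDs for the matching name
	ID=$(sudo crictl ps -a --quiet --name=kube-apiserver)
	which crictl    # the runner re-checks the binary location before every call
	echo "$ID"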
	I0120 18:12:29.852630  305308 logs.go:123] Gathering logs for CRI-O ...
	I0120 18:12:29.852654  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 18:12:29.958086  305308 logs.go:123] Gathering logs for container status ...
	I0120 18:12:29.958164  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 18:12:30.036809  305308 logs.go:123] Gathering logs for describe nodes ...
	I0120 18:12:30.037019  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0120 18:12:30.136110  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:30.153448  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:30.236590  305308 logs.go:123] Gathering logs for etcd [c0af55ef798bb002b69ed249f0155715811e3a521d1d62396b58ce4515fd5107] ...
	I0120 18:12:30.236625  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0af55ef798bb002b69ed249f0155715811e3a521d1d62396b58ce4515fd5107"
	I0120 18:12:30.307534  305308 logs.go:123] Gathering logs for coredns [2a0aba411e96a5025cf1e9b00a8974fff3960bb7af97638f03477ca8f8ca4c17] ...
	I0120 18:12:30.307577  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a0aba411e96a5025cf1e9b00a8974fff3960bb7af97638f03477ca8f8ca4c17"
	I0120 18:12:30.368446  305308 logs.go:123] Gathering logs for kube-scheduler [6ad3359fc83c313a2fbd8962d6bdc4afaad6c42b8a7e9bcd8b4e40daada7782e] ...
	I0120 18:12:30.368485  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ad3359fc83c313a2fbd8962d6bdc4afaad6c42b8a7e9bcd8b4e40daada7782e"
	I0120 18:12:30.425544  305308 logs.go:123] Gathering logs for kube-controller-manager [f8ab10cd574bd4293be8244fe853d42dec139f1b151c11461c94975d70b02a2d] ...
	I0120 18:12:30.425577  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8ab10cd574bd4293be8244fe853d42dec139f1b151c11461c94975d70b02a2d"
	I0120 18:12:30.511525  305308 logs.go:123] Gathering logs for kubelet ...
	I0120 18:12:30.511563  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0120 18:12:30.594012  305308 logs.go:138] Found kubelet problem: Jan 20 18:11:25 addons-483552 kubelet[1528]: W0120 18:11:25.931782    1528 reflector.go:569] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-483552" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-483552' and this object
	W0120 18:12:30.594289  305308 logs.go:138] Found kubelet problem: Jan 20 18:11:25 addons-483552 kubelet[1528]: E0120 18:11:25.931833    1528 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-483552\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-483552' and this object" logger="UnhandledError"
	W0120 18:12:30.594482  305308 logs.go:138] Found kubelet problem: Jan 20 18:11:25 addons-483552 kubelet[1528]: W0120 18:11:25.931882    1528 reflector.go:569] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-483552" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-483552' and this object
	W0120 18:12:30.594713  305308 logs.go:138] Found kubelet problem: Jan 20 18:11:25 addons-483552 kubelet[1528]: E0120 18:11:25.931897    1528 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-483552\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-483552' and this object" logger="UnhandledError"
	W0120 18:12:30.594885  305308 logs.go:138] Found kubelet problem: Jan 20 18:11:25 addons-483552 kubelet[1528]: W0120 18:11:25.988491    1528 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-483552" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-483552' and this object
	W0120 18:12:30.595096  305308 logs.go:138] Found kubelet problem: Jan 20 18:11:25 addons-483552 kubelet[1528]: E0120 18:11:25.988543    1528 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-483552\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-483552' and this object" logger="UnhandledError"
	W0120 18:12:30.595267  305308 logs.go:138] Found kubelet problem: Jan 20 18:11:25 addons-483552 kubelet[1528]: W0120 18:11:25.988724    1528 reflector.go:569] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-483552" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-483552' and this object
	W0120 18:12:30.595484  305308 logs.go:138] Found kubelet problem: Jan 20 18:11:25 addons-483552 kubelet[1528]: E0120 18:11:25.988753    1528 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-483552\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-483552' and this object" logger="UnhandledError"
	I0120 18:12:30.639419  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:30.642977  305308 logs.go:123] Gathering logs for dmesg ...
	I0120 18:12:30.643027  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 18:12:30.652675  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:30.664790  305308 logs.go:123] Gathering logs for kube-apiserver [1981feda8abc376061360f7a5bb875c16177c8ce626b528bdfa8f0896cd5c462] ...
	I0120 18:12:30.664868  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1981feda8abc376061360f7a5bb875c16177c8ce626b528bdfa8f0896cd5c462"
	I0120 18:12:30.739861  305308 logs.go:123] Gathering logs for kube-proxy [2b7ceafc84246299f3c3a07ff9b34f58bc49e27b440a4b5a27bc9caafaf866dd] ...
	I0120 18:12:30.739902  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b7ceafc84246299f3c3a07ff9b34f58bc49e27b440a4b5a27bc9caafaf866dd"
	I0120 18:12:30.779509  305308 logs.go:123] Gathering logs for kindnet [c644cc930f8a05b1d5b4990de8afae19938b2a66dbb40663d4a6acf04d395a47] ...
	I0120 18:12:30.779540  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c644cc930f8a05b1d5b4990de8afae19938b2a66dbb40663d4a6acf04d395a47"
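[editor's note] The whole gathering pass is three kinds of fetch; run manually they look like this (a sketch; substitute a container ID discovered as above):

	sudo journalctl -u crio -n 400                          # runtime logs
	sudo journalctl -u kubelet -n 400                       # kubelet logs
	sudo /usr/bin/crictl logs --tail 400 "$ID"              # last 400 lines of one container
	sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig             # node description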
	I0120 18:12:30.823682  305308 out.go:358] Setting ErrFile to fd 2...
	I0120 18:12:30.823708  305308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0120 18:12:30.823761  305308 out.go:270] X Problems detected in kubelet:
	W0120 18:12:30.823777  305308 out.go:270]   Jan 20 18:11:25 addons-483552 kubelet[1528]: E0120 18:11:25.931897    1528 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-483552\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-483552' and this object" logger="UnhandledError"
	W0120 18:12:30.823784  305308 out.go:270]   Jan 20 18:11:25 addons-483552 kubelet[1528]: W0120 18:11:25.988491    1528 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-483552" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-483552' and this object
	W0120 18:12:30.823797  305308 out.go:270]   Jan 20 18:11:25 addons-483552 kubelet[1528]: E0120 18:11:25.988543    1528 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-483552\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-483552' and this object" logger="UnhandledError"
	W0120 18:12:30.823803  305308 out.go:270]   Jan 20 18:11:25 addons-483552 kubelet[1528]: W0120 18:11:25.988724    1528 reflector.go:569] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-483552" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-483552' and this object
	W0120 18:12:30.823810  305308 out.go:270]   Jan 20 18:11:25 addons-483552 kubelet[1528]: E0120 18:11:25.988753    1528 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-483552\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-483552' and this object" logger="UnhandledError"
	I0120 18:12:30.823818  305308 out.go:358] Setting ErrFile to fd 2...
	I0120 18:12:30.823824  305308 out.go:392] TERM=,COLORTERM=, which probably does not support color
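[editor's note] The `no relationship found between node ... and this object` errors are the node authorizer rejecting list/watch calls the kubelet made before the pods referencing those ConfigMaps/Secrets were bound to the node; during addon start-up they are usually transient. One way to confirm they stopped recurring (a sketch):

	# count matches in the same 400-line window the gatherer reads
	sudo journalctl -u kubelet -n 400 | grep -c "no relationship found" || true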
	I0120 18:12:31.134949  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:31.155815  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:31.637676  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:31.651873  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:32.134616  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:32.152076  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:32.635981  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:32.653029  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:33.134909  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:33.152197  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:33.643116  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:33.652423  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:34.136922  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:34.154923  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:34.634053  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:34.652691  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:35.134097  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:35.153279  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:35.635015  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:35.652405  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:36.134534  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:36.152678  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:36.637827  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:36.655700  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:37.134536  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:37.152791  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:37.637355  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:37.656533  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:38.135733  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:38.153365  305308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 18:12:38.635287  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:38.652279  305308 kapi.go:107] duration metric: took 1m48.504574811s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0120 18:12:39.134691  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:39.634983  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:40.134991  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:40.634553  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:40.824928  305308 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0120 18:12:40.833621  305308 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0120 18:12:40.834735  305308 api_server.go:141] control plane version: v1.32.0
	I0120 18:12:40.834763  305308 api_server.go:131] duration metric: took 11.39358328s to wait for apiserver health ...
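[editor's note] No client certificate is needed for this probe: `/healthz` (like `/livez`, `/readyz`, and `/version`) is granted to unauthenticated users by the default `system:public-info-viewer` binding, so the check reproduces with plain curl (a sketch):

	curl -sk https://192.168.49.2:8443/healthz              # prints: ok
	curl -sk "https://192.168.49.2:8443/readyz?verbose"     # per-check breakdown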
	I0120 18:12:40.834773  305308 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 18:12:40.834794  305308 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 18:12:40.834857  305308 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 18:12:40.880496  305308 cri.go:89] found id: "1981feda8abc376061360f7a5bb875c16177c8ce626b528bdfa8f0896cd5c462"
	I0120 18:12:40.880519  305308 cri.go:89] found id: ""
	I0120 18:12:40.880528  305308 logs.go:282] 1 containers: [1981feda8abc376061360f7a5bb875c16177c8ce626b528bdfa8f0896cd5c462]
	I0120 18:12:40.880611  305308 ssh_runner.go:195] Run: which crictl
	I0120 18:12:40.884755  305308 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 18:12:40.884845  305308 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 18:12:40.923341  305308 cri.go:89] found id: "c0af55ef798bb002b69ed249f0155715811e3a521d1d62396b58ce4515fd5107"
	I0120 18:12:40.923364  305308 cri.go:89] found id: ""
	I0120 18:12:40.923372  305308 logs.go:282] 1 containers: [c0af55ef798bb002b69ed249f0155715811e3a521d1d62396b58ce4515fd5107]
	I0120 18:12:40.923427  305308 ssh_runner.go:195] Run: which crictl
	I0120 18:12:40.926838  305308 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 18:12:40.926907  305308 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 18:12:40.964798  305308 cri.go:89] found id: "2a0aba411e96a5025cf1e9b00a8974fff3960bb7af97638f03477ca8f8ca4c17"
	I0120 18:12:40.964867  305308 cri.go:89] found id: ""
	I0120 18:12:40.964890  305308 logs.go:282] 1 containers: [2a0aba411e96a5025cf1e9b00a8974fff3960bb7af97638f03477ca8f8ca4c17]
	I0120 18:12:40.964973  305308 ssh_runner.go:195] Run: which crictl
	I0120 18:12:40.969450  305308 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 18:12:40.969520  305308 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 18:12:41.017500  305308 cri.go:89] found id: "6ad3359fc83c313a2fbd8962d6bdc4afaad6c42b8a7e9bcd8b4e40daada7782e"
	I0120 18:12:41.017591  305308 cri.go:89] found id: ""
	I0120 18:12:41.017616  305308 logs.go:282] 1 containers: [6ad3359fc83c313a2fbd8962d6bdc4afaad6c42b8a7e9bcd8b4e40daada7782e]
	I0120 18:12:41.017698  305308 ssh_runner.go:195] Run: which crictl
	I0120 18:12:41.021346  305308 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 18:12:41.021419  305308 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 18:12:41.068931  305308 cri.go:89] found id: "2b7ceafc84246299f3c3a07ff9b34f58bc49e27b440a4b5a27bc9caafaf866dd"
	I0120 18:12:41.068951  305308 cri.go:89] found id: ""
	I0120 18:12:41.068961  305308 logs.go:282] 1 containers: [2b7ceafc84246299f3c3a07ff9b34f58bc49e27b440a4b5a27bc9caafaf866dd]
	I0120 18:12:41.069017  305308 ssh_runner.go:195] Run: which crictl
	I0120 18:12:41.072631  305308 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 18:12:41.072752  305308 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 18:12:41.117881  305308 cri.go:89] found id: "f8ab10cd574bd4293be8244fe853d42dec139f1b151c11461c94975d70b02a2d"
	I0120 18:12:41.117951  305308 cri.go:89] found id: ""
	I0120 18:12:41.117966  305308 logs.go:282] 1 containers: [f8ab10cd574bd4293be8244fe853d42dec139f1b151c11461c94975d70b02a2d]
	I0120 18:12:41.118024  305308 ssh_runner.go:195] Run: which crictl
	I0120 18:12:41.121606  305308 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 18:12:41.121725  305308 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 18:12:41.136214  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:41.161307  305308 cri.go:89] found id: "c644cc930f8a05b1d5b4990de8afae19938b2a66dbb40663d4a6acf04d395a47"
	I0120 18:12:41.161340  305308 cri.go:89] found id: ""
	I0120 18:12:41.161348  305308 logs.go:282] 1 containers: [c644cc930f8a05b1d5b4990de8afae19938b2a66dbb40663d4a6acf04d395a47]
	I0120 18:12:41.161434  305308 ssh_runner.go:195] Run: which crictl
	I0120 18:12:41.165092  305308 logs.go:123] Gathering logs for kubelet ...
	I0120 18:12:41.165119  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0120 18:12:41.237559  305308 logs.go:138] Found kubelet problem: Jan 20 18:11:25 addons-483552 kubelet[1528]: W0120 18:11:25.931782    1528 reflector.go:569] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-483552" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-483552' and this object
	W0120 18:12:41.237839  305308 logs.go:138] Found kubelet problem: Jan 20 18:11:25 addons-483552 kubelet[1528]: E0120 18:11:25.931833    1528 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-483552\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-483552' and this object" logger="UnhandledError"
	W0120 18:12:41.238031  305308 logs.go:138] Found kubelet problem: Jan 20 18:11:25 addons-483552 kubelet[1528]: W0120 18:11:25.931882    1528 reflector.go:569] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-483552" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-483552' and this object
	W0120 18:12:41.238264  305308 logs.go:138] Found kubelet problem: Jan 20 18:11:25 addons-483552 kubelet[1528]: E0120 18:11:25.931897    1528 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-483552\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-483552' and this object" logger="UnhandledError"
	W0120 18:12:41.238438  305308 logs.go:138] Found kubelet problem: Jan 20 18:11:25 addons-483552 kubelet[1528]: W0120 18:11:25.988491    1528 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-483552" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-483552' and this object
	W0120 18:12:41.238651  305308 logs.go:138] Found kubelet problem: Jan 20 18:11:25 addons-483552 kubelet[1528]: E0120 18:11:25.988543    1528 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-483552\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-483552' and this object" logger="UnhandledError"
	W0120 18:12:41.238826  305308 logs.go:138] Found kubelet problem: Jan 20 18:11:25 addons-483552 kubelet[1528]: W0120 18:11:25.988724    1528 reflector.go:569] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-483552" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-483552' and this object
	W0120 18:12:41.239039  305308 logs.go:138] Found kubelet problem: Jan 20 18:11:25 addons-483552 kubelet[1528]: E0120 18:11:25.988753    1528 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-483552\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-483552' and this object" logger="UnhandledError"
	I0120 18:12:41.282217  305308 logs.go:123] Gathering logs for describe nodes ...
	I0120 18:12:41.282246  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0120 18:12:41.410692  305308 logs.go:123] Gathering logs for kube-scheduler [6ad3359fc83c313a2fbd8962d6bdc4afaad6c42b8a7e9bcd8b4e40daada7782e] ...
	I0120 18:12:41.410724  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ad3359fc83c313a2fbd8962d6bdc4afaad6c42b8a7e9bcd8b4e40daada7782e"
	I0120 18:12:41.457509  305308 logs.go:123] Gathering logs for kube-proxy [2b7ceafc84246299f3c3a07ff9b34f58bc49e27b440a4b5a27bc9caafaf866dd] ...
	I0120 18:12:41.457540  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b7ceafc84246299f3c3a07ff9b34f58bc49e27b440a4b5a27bc9caafaf866dd"
	I0120 18:12:41.498468  305308 logs.go:123] Gathering logs for kindnet [c644cc930f8a05b1d5b4990de8afae19938b2a66dbb40663d4a6acf04d395a47] ...
	I0120 18:12:41.498494  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c644cc930f8a05b1d5b4990de8afae19938b2a66dbb40663d4a6acf04d395a47"
	I0120 18:12:41.551361  305308 logs.go:123] Gathering logs for CRI-O ...
	I0120 18:12:41.551440  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 18:12:41.634804  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:41.666687  305308 logs.go:123] Gathering logs for container status ...
	I0120 18:12:41.666721  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 18:12:41.740810  305308 logs.go:123] Gathering logs for dmesg ...
	I0120 18:12:41.740924  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 18:12:41.766830  305308 logs.go:123] Gathering logs for kube-apiserver [1981feda8abc376061360f7a5bb875c16177c8ce626b528bdfa8f0896cd5c462] ...
	I0120 18:12:41.766913  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1981feda8abc376061360f7a5bb875c16177c8ce626b528bdfa8f0896cd5c462"
	I0120 18:12:41.868183  305308 logs.go:123] Gathering logs for etcd [c0af55ef798bb002b69ed249f0155715811e3a521d1d62396b58ce4515fd5107] ...
	I0120 18:12:41.868228  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0af55ef798bb002b69ed249f0155715811e3a521d1d62396b58ce4515fd5107"
	I0120 18:12:41.985629  305308 logs.go:123] Gathering logs for coredns [2a0aba411e96a5025cf1e9b00a8974fff3960bb7af97638f03477ca8f8ca4c17] ...
	I0120 18:12:41.985667  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a0aba411e96a5025cf1e9b00a8974fff3960bb7af97638f03477ca8f8ca4c17"
	I0120 18:12:42.085018  305308 logs.go:123] Gathering logs for kube-controller-manager [f8ab10cd574bd4293be8244fe853d42dec139f1b151c11461c94975d70b02a2d] ...
	I0120 18:12:42.085065  305308 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8ab10cd574bd4293be8244fe853d42dec139f1b151c11461c94975d70b02a2d"
	I0120 18:12:42.136687  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:42.239560  305308 out.go:358] Setting ErrFile to fd 2...
	I0120 18:12:42.239595  305308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0120 18:12:42.239652  305308 out.go:270] X Problems detected in kubelet:
	W0120 18:12:42.239669  305308 out.go:270]   Jan 20 18:11:25 addons-483552 kubelet[1528]: E0120 18:11:25.931897    1528 reflector.go:166] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-483552\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-483552' and this object" logger="UnhandledError"
	W0120 18:12:42.239685  305308 out.go:270]   Jan 20 18:11:25 addons-483552 kubelet[1528]: W0120 18:11:25.988491    1528 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-483552" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-483552' and this object
	W0120 18:12:42.239695  305308 out.go:270]   Jan 20 18:11:25 addons-483552 kubelet[1528]: E0120 18:11:25.988543    1528 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-483552\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-483552' and this object" logger="UnhandledError"
	W0120 18:12:42.239709  305308 out.go:270]   Jan 20 18:11:25 addons-483552 kubelet[1528]: W0120 18:11:25.988724    1528 reflector.go:569] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-483552" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-483552' and this object
	W0120 18:12:42.239715  305308 out.go:270]   Jan 20 18:11:25 addons-483552 kubelet[1528]: E0120 18:11:25.988753    1528 reflector.go:166] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-483552\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-483552' and this object" logger="UnhandledError"
	I0120 18:12:42.239721  305308 out.go:358] Setting ErrFile to fd 2...
	I0120 18:12:42.239728  305308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 18:12:42.634983  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:43.134459  305308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 18:12:43.635899  305308 kapi.go:107] duration metric: took 1m53.00647311s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0120 18:12:43.638954  305308 out.go:177] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, cloud-spanner, ingress-dns, storage-provisioner-rancher, storage-provisioner, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0120 18:12:43.641685  305308 addons.go:514] duration metric: took 1m59.641940928s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin cloud-spanner ingress-dns storage-provisioner-rancher storage-provisioner inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0120 18:12:52.253035  305308 system_pods.go:59] 18 kube-system pods found
	I0120 18:12:52.253144  305308 system_pods.go:61] "coredns-668d6bf9bc-7pl9p" [797c2fd4-f111-46d3-8573-3e8a171f82ab] Running
	I0120 18:12:52.253160  305308 system_pods.go:61] "csi-hostpath-attacher-0" [ee817f01-d518-4113-b957-fb0622fbd6fb] Running
	I0120 18:12:52.253166  305308 system_pods.go:61] "csi-hostpath-resizer-0" [5cbc911f-a8a1-4b9a-8798-e28312bd0b5b] Running
	I0120 18:12:52.253171  305308 system_pods.go:61] "csi-hostpathplugin-gnz4h" [f1a3f1de-3577-4aea-8b5b-0f1f7a5d6b2f] Running
	I0120 18:12:52.253175  305308 system_pods.go:61] "etcd-addons-483552" [6fca21f0-4dbc-409e-8da4-028fc9f6d3b6] Running
	I0120 18:12:52.253179  305308 system_pods.go:61] "kindnet-xh7z7" [603fe4cc-461d-4544-9004-7f7e90287079] Running
	I0120 18:12:52.253207  305308 system_pods.go:61] "kube-apiserver-addons-483552" [7b621d8f-ec59-4cbe-8d51-212f1a91c1d1] Running
	I0120 18:12:52.253212  305308 system_pods.go:61] "kube-controller-manager-addons-483552" [cc0eb3cd-aaa8-487f-95f6-7d0e298ef731] Running
	I0120 18:12:52.253217  305308 system_pods.go:61] "kube-ingress-dns-minikube" [5bafaf3f-0190-4b98-9bf3-5f107f06241b] Running
	I0120 18:12:52.253222  305308 system_pods.go:61] "kube-proxy-rj7jn" [c832d07e-e166-484a-96e1-4230aa4c4794] Running
	I0120 18:12:52.253226  305308 system_pods.go:61] "kube-scheduler-addons-483552" [78fb5e73-7c18-40e6-9e20-edb174915bca] Running
	I0120 18:12:52.253246  305308 system_pods.go:61] "metrics-server-7fbb699795-l78fs" [24ef1eeb-9b7c-42cc-a2e9-ba2c03c3cb4c] Running
	I0120 18:12:52.253251  305308 system_pods.go:61] "nvidia-device-plugin-daemonset-sbfpn" [1873cb8c-d69f-4e43-b297-4784a8e6b0c1] Running
	I0120 18:12:52.253255  305308 system_pods.go:61] "registry-6c86875c6f-8cc5t" [11eece58-b31f-45d9-9831-26bd887b6621] Running
	I0120 18:12:52.253263  305308 system_pods.go:61] "registry-proxy-q7m8l" [b119f04c-b7e8-4eba-926b-814b9001158d] Running
	I0120 18:12:52.253268  305308 system_pods.go:61] "snapshot-controller-68b874b76f-m6s89" [9c3aa13c-1872-4ebb-bc4a-5302095397a6] Running
	I0120 18:12:52.253281  305308 system_pods.go:61] "snapshot-controller-68b874b76f-qgcjp" [2efdff3b-4011-4d3e-82e5-5da13bceceb5] Running
	I0120 18:12:52.253285  305308 system_pods.go:61] "storage-provisioner" [d9bda460-c6ca-4b17-b61e-d502c435486d] Running
	I0120 18:12:52.253291  305308 system_pods.go:74] duration metric: took 11.418511508s to wait for pod list to return data ...
	I0120 18:12:52.253313  305308 default_sa.go:34] waiting for default service account to be created ...
	I0120 18:12:52.255532  305308 default_sa.go:45] found service account: "default"
	I0120 18:12:52.255560  305308 default_sa.go:55] duration metric: took 2.231659ms for default service account to be created ...
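[editor's note] Both waits above map onto one-line kubectl checks (a sketch):

	kubectl --context addons-483552 get pods -n kube-system --field-selector=status.phase=Running
	kubectl --context addons-483552 get serviceaccount default -n default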
	I0120 18:12:52.255576  305308 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 18:12:52.265543  305308 system_pods.go:87] 18 kube-system pods found
	I0120 18:12:52.268938  305308 system_pods.go:105] "coredns-668d6bf9bc-7pl9p" [797c2fd4-f111-46d3-8573-3e8a171f82ab] Running
	I0120 18:12:52.268969  305308 system_pods.go:105] "csi-hostpath-attacher-0" [ee817f01-d518-4113-b957-fb0622fbd6fb] Running
	I0120 18:12:52.268976  305308 system_pods.go:105] "csi-hostpath-resizer-0" [5cbc911f-a8a1-4b9a-8798-e28312bd0b5b] Running
	I0120 18:12:52.268981  305308 system_pods.go:105] "csi-hostpathplugin-gnz4h" [f1a3f1de-3577-4aea-8b5b-0f1f7a5d6b2f] Running
	I0120 18:12:52.268986  305308 system_pods.go:105] "etcd-addons-483552" [6fca21f0-4dbc-409e-8da4-028fc9f6d3b6] Running
	I0120 18:12:52.268991  305308 system_pods.go:105] "kindnet-xh7z7" [603fe4cc-461d-4544-9004-7f7e90287079] Running
	I0120 18:12:52.268996  305308 system_pods.go:105] "kube-apiserver-addons-483552" [7b621d8f-ec59-4cbe-8d51-212f1a91c1d1] Running
	I0120 18:12:52.269001  305308 system_pods.go:105] "kube-controller-manager-addons-483552" [cc0eb3cd-aaa8-487f-95f6-7d0e298ef731] Running
	I0120 18:12:52.269006  305308 system_pods.go:105] "kube-ingress-dns-minikube" [5bafaf3f-0190-4b98-9bf3-5f107f06241b] Running
	I0120 18:12:52.269011  305308 system_pods.go:105] "kube-proxy-rj7jn" [c832d07e-e166-484a-96e1-4230aa4c4794] Running
	I0120 18:12:52.269016  305308 system_pods.go:105] "kube-scheduler-addons-483552" [78fb5e73-7c18-40e6-9e20-edb174915bca] Running
	I0120 18:12:52.269022  305308 system_pods.go:105] "metrics-server-7fbb699795-l78fs" [24ef1eeb-9b7c-42cc-a2e9-ba2c03c3cb4c] Running
	I0120 18:12:52.269037  305308 system_pods.go:105] "nvidia-device-plugin-daemonset-sbfpn" [1873cb8c-d69f-4e43-b297-4784a8e6b0c1] Running
	I0120 18:12:52.269042  305308 system_pods.go:105] "registry-6c86875c6f-8cc5t" [11eece58-b31f-45d9-9831-26bd887b6621] Running
	I0120 18:12:52.269052  305308 system_pods.go:105] "registry-proxy-q7m8l" [b119f04c-b7e8-4eba-926b-814b9001158d] Running
	I0120 18:12:52.269057  305308 system_pods.go:105] "snapshot-controller-68b874b76f-m6s89" [9c3aa13c-1872-4ebb-bc4a-5302095397a6] Running
	I0120 18:12:52.269062  305308 system_pods.go:105] "snapshot-controller-68b874b76f-qgcjp" [2efdff3b-4011-4d3e-82e5-5da13bceceb5] Running
	I0120 18:12:52.269071  305308 system_pods.go:105] "storage-provisioner" [d9bda460-c6ca-4b17-b61e-d502c435486d] Running
	I0120 18:12:52.269079  305308 system_pods.go:147] duration metric: took 13.496957ms to wait for k8s-apps to be running ...
	I0120 18:12:52.269095  305308 system_svc.go:44] waiting for kubelet service to be running ....
	I0120 18:12:52.269150  305308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 18:12:52.281110  305308 system_svc.go:56] duration metric: took 12.004935ms WaitForService to wait for kubelet
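[editor's note] The service check is a plain systemd query. minikube's literal invocation passes the stray token `service` as an extra unit name; since `systemctl is-active` succeeds if at least one listed unit is active, the extra token appears harmless. The equivalent check (a sketch):

	sudo systemctl is-active --quiet kubelet && echo "kubelet running"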
	I0120 18:12:52.281182  305308 kubeadm.go:582] duration metric: took 2m8.281892116s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 18:12:52.281207  305308 node_conditions.go:102] verifying NodePressure condition ...
	I0120 18:12:52.284728  305308 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0120 18:12:52.284765  305308 node_conditions.go:123] node cpu capacity is 2
	I0120 18:12:52.284779  305308 node_conditions.go:105] duration metric: took 3.565129ms to run NodePressure ...
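[editor's note] The capacity figures come straight from the node object and can be read back with jsonpath (a sketch):

	kubectl --context addons-483552 get node addons-483552 \
	  -o jsonpath='{.status.capacity.cpu}{" "}{.status.capacity.ephemeral-storage}{"\n"}'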
	I0120 18:12:52.284790  305308 start.go:241] waiting for startup goroutines ...
	I0120 18:12:52.284798  305308 start.go:246] waiting for cluster config update ...
	I0120 18:12:52.284815  305308 start.go:255] writing updated cluster config ...
	I0120 18:12:52.285136  305308 ssh_runner.go:195] Run: rm -f paused
	I0120 18:12:52.689221  305308 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
	I0120 18:12:52.692658  305308 out.go:177] * Done! kubectl is now configured to use "addons-483552" cluster and "default" namespace by default
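[editor's note] The closing line is a client/server minor-version skew check; it reproduces as (a sketch):

	kubectl config current-context      # addons-483552
	kubectl version                     # client v1.32.1 vs server v1.32.0, skew 0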
	
	
	==> CRI-O <==
	Jan 20 18:15:39 addons-483552 crio[980]: time="2025-01-20 18:15:39.317571678Z" level=info msg="Removed pod sandbox: f9f4cbb4d387aa15f03e855582fd409202b5ace9282cb132d51ac09059b5cf52" id=19d964ba-5dce-4f70-a9bd-490770de462e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jan 20 18:16:10 addons-483552 crio[980]: time="2025-01-20 18:16:10.354552343Z" level=info msg="Running pod sandbox: default/hello-world-app-7d9564db4-rw2hh/POD" id=135a17da-88c6-4f96-90d6-1b4713deb12f name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 20 18:16:10 addons-483552 crio[980]: time="2025-01-20 18:16:10.354611345Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 20 18:16:10 addons-483552 crio[980]: time="2025-01-20 18:16:10.398713026Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-rw2hh Namespace:default ID:5076c4fc6cde4da70ef5af6f76061195bc7642f8895864f61e4ed6463d8269ed UID:ec296f41-614d-4cd5-a40a-a33daaa16e94 NetNS:/var/run/netns/e2149168-85fb-4c68-acba-489a995dba4a Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 20 18:16:10 addons-483552 crio[980]: time="2025-01-20 18:16:10.398779199Z" level=info msg="Adding pod default_hello-world-app-7d9564db4-rw2hh to CNI network \"kindnet\" (type=ptp)"
	Jan 20 18:16:10 addons-483552 crio[980]: time="2025-01-20 18:16:10.412677090Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-rw2hh Namespace:default ID:5076c4fc6cde4da70ef5af6f76061195bc7642f8895864f61e4ed6463d8269ed UID:ec296f41-614d-4cd5-a40a-a33daaa16e94 NetNS:/var/run/netns/e2149168-85fb-4c68-acba-489a995dba4a Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 20 18:16:10 addons-483552 crio[980]: time="2025-01-20 18:16:10.412853167Z" level=info msg="Checking pod default_hello-world-app-7d9564db4-rw2hh for CNI network kindnet (type=ptp)"
	Jan 20 18:16:10 addons-483552 crio[980]: time="2025-01-20 18:16:10.421565347Z" level=info msg="Ran pod sandbox 5076c4fc6cde4da70ef5af6f76061195bc7642f8895864f61e4ed6463d8269ed with infra container: default/hello-world-app-7d9564db4-rw2hh/POD" id=135a17da-88c6-4f96-90d6-1b4713deb12f name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 20 18:16:10 addons-483552 crio[980]: time="2025-01-20 18:16:10.423041491Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=ff3d59b9-bb5c-4cd1-b81e-3874a3a9230a name=/runtime.v1.ImageService/ImageStatus
	Jan 20 18:16:10 addons-483552 crio[980]: time="2025-01-20 18:16:10.423310078Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=ff3d59b9-bb5c-4cd1-b81e-3874a3a9230a name=/runtime.v1.ImageService/ImageStatus
	Jan 20 18:16:10 addons-483552 crio[980]: time="2025-01-20 18:16:10.426164712Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=07e4628a-a137-4b9c-aec9-634982adcd3e name=/runtime.v1.ImageService/PullImage
	Jan 20 18:16:10 addons-483552 crio[980]: time="2025-01-20 18:16:10.428961140Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Jan 20 18:16:10 addons-483552 crio[980]: time="2025-01-20 18:16:10.674863805Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Jan 20 18:16:11 addons-483552 crio[980]: time="2025-01-20 18:16:11.469922524Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=07e4628a-a137-4b9c-aec9-634982adcd3e name=/runtime.v1.ImageService/PullImage
	Jan 20 18:16:11 addons-483552 crio[980]: time="2025-01-20 18:16:11.470753484Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=c1c79ced-85ca-4f63-b754-43a744b7533f name=/runtime.v1.ImageService/ImageStatus
	Jan 20 18:16:11 addons-483552 crio[980]: time="2025-01-20 18:16:11.471405896Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c1c79ced-85ca-4f63-b754-43a744b7533f name=/runtime.v1.ImageService/ImageStatus
	Jan 20 18:16:11 addons-483552 crio[980]: time="2025-01-20 18:16:11.472445441Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=8b293d50-87d8-4b55-8f43-794e80d0801b name=/runtime.v1.ImageService/ImageStatus
	Jan 20 18:16:11 addons-483552 crio[980]: time="2025-01-20 18:16:11.473062778Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8b293d50-87d8-4b55-8f43-794e80d0801b name=/runtime.v1.ImageService/ImageStatus
	Jan 20 18:16:11 addons-483552 crio[980]: time="2025-01-20 18:16:11.473895477Z" level=info msg="Creating container: default/hello-world-app-7d9564db4-rw2hh/hello-world-app" id=5e1f2d9b-ccba-46a7-b753-24806b9a02da name=/runtime.v1.RuntimeService/CreateContainer
	Jan 20 18:16:11 addons-483552 crio[980]: time="2025-01-20 18:16:11.473993157Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 20 18:16:11 addons-483552 crio[980]: time="2025-01-20 18:16:11.502028392Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4924f41b39f2cb6dce24a6c9b8c564ef4440123a0902b6af402f1617be2f0565/merged/etc/passwd: no such file or directory"
	Jan 20 18:16:11 addons-483552 crio[980]: time="2025-01-20 18:16:11.502074733Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4924f41b39f2cb6dce24a6c9b8c564ef4440123a0902b6af402f1617be2f0565/merged/etc/group: no such file or directory"
	Jan 20 18:16:11 addons-483552 crio[980]: time="2025-01-20 18:16:11.562563158Z" level=info msg="Created container ee539999398092b36e760fab6b8a69f4fbcfbe503e1515a487afdeb832fff60c: default/hello-world-app-7d9564db4-rw2hh/hello-world-app" id=5e1f2d9b-ccba-46a7-b753-24806b9a02da name=/runtime.v1.RuntimeService/CreateContainer
	Jan 20 18:16:11 addons-483552 crio[980]: time="2025-01-20 18:16:11.563307933Z" level=info msg="Starting container: ee539999398092b36e760fab6b8a69f4fbcfbe503e1515a487afdeb832fff60c" id=6ba4394d-4df1-4adb-ae2c-eecbec6015d8 name=/runtime.v1.RuntimeService/StartContainer
	Jan 20 18:16:11 addons-483552 crio[980]: time="2025-01-20 18:16:11.577472468Z" level=info msg="Started container" PID=8726 containerID=ee539999398092b36e760fab6b8a69f4fbcfbe503e1515a487afdeb832fff60c description=default/hello-world-app-7d9564db4-rw2hh/hello-world-app id=6ba4394d-4df1-4adb-ae2c-eecbec6015d8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5076c4fc6cde4da70ef5af6f76061195bc7642f8895864f61e4ed6463d8269ed
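[editor's note] The pull-then-create sequence above can be replayed against CRI-O directly through crictl (a sketch; the tag is the one logged):

	sudo crictl pull docker.io/kicbase/echo-server:1.0
	sudo crictl images --digests | grep echo-server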
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	ee53999939809       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   5076c4fc6cde4       hello-world-app-7d9564db4-rw2hh
	23a34cc0eb37c       docker.io/library/nginx@sha256:4338a8ba9b9962d07e30e7ff4bbf27d62ee7523deb7205e8f0912169f1bbac10                              2 minutes ago            Running             nginx                     0                   0edd461fc73bb       nginx
	9fd5e487fb5fd       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                   0                   ef117fc115f85       busybox
	6bd86894a8929       registry.k8s.io/ingress-nginx/controller@sha256:787a5408fa511266888b2e765f9666bee67d9bf2518a6b7cfd4ab6cc01c22eee             3 minutes ago            Running             controller                0                   4006354839d75       ingress-nginx-controller-56d7c84fd4-4hqdp
	ee4d819901d45       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0550b75a965592f1dde3fbeaa98f67a1e10c5a086bcd69a29054cc4edcb56771   3 minutes ago            Exited              patch                     0                   487e78b1faa0f       ingress-nginx-admission-patch-jkpt5
	4c83a73f49b88       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0550b75a965592f1dde3fbeaa98f67a1e10c5a086bcd69a29054cc4edcb56771   3 minutes ago            Exited              create                    0                   60a396ba70308       ingress-nginx-admission-create-jrgcw
	aef01f322299a       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             4 minutes ago            Running             minikube-ingress-dns      0                   33e7a7bd948a6       kube-ingress-dns-minikube
	2a0aba411e96a       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             4 minutes ago            Running             coredns                   0                   18ef0a6f06481       coredns-668d6bf9bc-7pl9p
	86813762591e0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             4 minutes ago            Running             storage-provisioner       0                   e0beb5317b239       storage-provisioner
	2b7ceafc84246       2f50386e20bfdb3f3b38672c585959554196426c66cc1905e7e7115c47cc2e67                                                             5 minutes ago            Running             kube-proxy                0                   bf15aaa1683ea       kube-proxy-rj7jn
	c644cc930f8a0       2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903                                                             5 minutes ago            Running             kindnet-cni               0                   c6dec5a5d5602       kindnet-xh7z7
	f8ab10cd574bd       a8d049396f6b8f19df1e3f6b132cb1b9696806ddf19808f97305dd16fce9450c                                                             5 minutes ago            Running             kube-controller-manager   0                   0dc4ed034f06e       kube-controller-manager-addons-483552
	6ad3359fc83c3       c3ff26fb59f37b5910877d6e3de46aa6b020e586bdf2b441ab5f53b6f0a1797d                                                             5 minutes ago            Running             kube-scheduler            0                   6b8e6a4437e79       kube-scheduler-addons-483552
	1981feda8abc3       2b5bd0f16085ac8a7260c30946f3668948a0bb88ac0b9cad635940e3dbef16dc                                                             5 minutes ago            Running             kube-apiserver            0                   a13f05ba055de       kube-apiserver-addons-483552
	c0af55ef798bb       7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82                                                             5 minutes ago            Running             etcd                      0                   13af5098b5b7d       etcd-addons-483552
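[editor's note] This table is `crictl ps -a` output; to follow one workload instead of the full list (a sketch):

	sudo crictl ps --name hello-world-app       # running containers matching the name
	sudo crictl ps -a --state exited            # the completed admission create/patch jobs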
	
	
	==> coredns [2a0aba411e96a5025cf1e9b00a8974fff3960bb7af97638f03477ca8f8ca4c17] <==
	[INFO] 10.244.0.12:35301 - 42662 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002551793s
	[INFO] 10.244.0.12:35301 - 40813 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000111241s
	[INFO] 10.244.0.12:35301 - 52490 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000193519s
	[INFO] 10.244.0.12:52726 - 36472 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000261981s
	[INFO] 10.244.0.12:52726 - 36677 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000328071s
	[INFO] 10.244.0.12:45757 - 63627 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000124697s
	[INFO] 10.244.0.12:45757 - 63430 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000198213s
	[INFO] 10.244.0.12:49198 - 64747 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000163211s
	[INFO] 10.244.0.12:49198 - 64318 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000120612s
	[INFO] 10.244.0.12:48550 - 63101 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004241739s
	[INFO] 10.244.0.12:48550 - 62920 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.006376724s
	[INFO] 10.244.0.12:56707 - 16880 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000273s
	[INFO] 10.244.0.12:56707 - 16451 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000281345s
	[INFO] 10.244.0.20:48738 - 27202 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000268266s
	[INFO] 10.244.0.20:36158 - 31631 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000181434s
	[INFO] 10.244.0.20:50623 - 39983 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000161258s
	[INFO] 10.244.0.20:59641 - 9535 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00029421s
	[INFO] 10.244.0.20:56860 - 33124 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000136874s
	[INFO] 10.244.0.20:43695 - 717 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000130499s
	[INFO] 10.244.0.20:36620 - 32278 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002108624s
	[INFO] 10.244.0.20:42216 - 17035 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001827042s
	[INFO] 10.244.0.20:44809 - 61221 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001701901s
	[INFO] 10.244.0.20:33898 - 3534 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003501794s
	[INFO] 10.244.0.24:37051 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000195088s
	[INFO] 10.244.0.24:60314 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000135635s
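[editor's note] The NXDOMAIN bursts above are ordinary `ndots:5` search-path expansion: a short name is tried against every search domain before the absolute form answers. Querying the fully-qualified name (trailing dot) skips the expansion; from the busybox pod already running in `default` (a sketch):

	kubectl --context addons-483552 exec busybox -- nslookup registry.kube-system.svc.cluster.local.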
	
	
	==> describe nodes <==
	Name:               addons-483552
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-483552
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5361cb60dc81b84464882b386f50211c10a5a7cc
	                    minikube.k8s.io/name=addons-483552
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_20T18_10_39_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-483552
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Jan 2025 18:10:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-483552
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Jan 2025 18:16:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Jan 2025 18:15:14 +0000   Mon, 20 Jan 2025 18:10:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Jan 2025 18:15:14 +0000   Mon, 20 Jan 2025 18:10:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Jan 2025 18:15:14 +0000   Mon, 20 Jan 2025 18:10:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Jan 2025 18:15:14 +0000   Mon, 20 Jan 2025 18:11:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-483552
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 034cf2813f8a4a05977016746741fefa
	  System UUID:                82021dd2-fe03-456a-bf84-9e360ac6b650
	  Boot ID:                    b8c3612f-8bbe-4374-bf0d-c53be6541566
	  Kernel Version:             5.15.0-1075-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	  default                     hello-world-app-7d9564db4-rw2hh              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-4hqdp    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m23s
	  kube-system                 coredns-668d6bf9bc-7pl9p                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m29s
	  kube-system                 etcd-addons-483552                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m33s
	  kube-system                 kindnet-xh7z7                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m29s
	  kube-system                 kube-apiserver-addons-483552                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-controller-manager-addons-483552        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-proxy-rj7jn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-scheduler-addons-483552                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m22s                  kube-proxy       
	  Normal   Starting                 5m41s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m41s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m41s (x8 over 5m41s)  kubelet          Node addons-483552 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m41s (x8 over 5m41s)  kubelet          Node addons-483552 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m41s (x8 over 5m41s)  kubelet          Node addons-483552 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m34s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m34s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m34s                  kubelet          Node addons-483552 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m34s                  kubelet          Node addons-483552 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m34s                  kubelet          Node addons-483552 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m30s                  node-controller  Node addons-483552 event: Registered Node addons-483552 in Controller
	  Normal   NodeReady                4m47s                  kubelet          Node addons-483552 status is now: NodeReady
	
	
	==> dmesg <==
	[Jan20 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014413] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.511214] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032926] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.796282] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.026545] kauditd_printk_skb: 36 callbacks suppressed
	[Jan20 17:00] hrtimer: interrupt took 4540795 ns
	[Jan20 17:37] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [c0af55ef798bb002b69ed249f0155715811e3a521d1d62396b58ce4515fd5107] <==
	{"level":"info","ts":"2025-01-20T18:10:33.114279Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-20T18:10:33.114677Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-20T18:10:33.115358Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-20T18:10:33.116318Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-20T18:10:33.116880Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-20T18:10:33.125170Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-01-20T18:10:33.125833Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-20T18:10:33.125874Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-20T18:10:33.125853Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-20T18:10:33.126011Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-20T18:10:33.126063Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-20T18:10:45.517295Z","caller":"traceutil/trace.go:171","msg":"trace[478414706] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"152.192946ms","start":"2025-01-20T18:10:45.365086Z","end":"2025-01-20T18:10:45.517279Z","steps":["trace[478414706] 'process raft request'  (duration: 151.947392ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T18:10:46.863751Z","caller":"traceutil/trace.go:171","msg":"trace[2118348306] transaction","detail":"{read_only:false; response_revision:397; number_of_response:1; }","duration":"144.803167ms","start":"2025-01-20T18:10:46.718801Z","end":"2025-01-20T18:10:46.863732Z","steps":["trace[2118348306] 'process raft request'  (duration: 139.079785ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T18:10:47.293047Z","caller":"traceutil/trace.go:171","msg":"trace[115098696] transaction","detail":"{read_only:false; response_revision:402; number_of_response:1; }","duration":"105.244455ms","start":"2025-01-20T18:10:47.187783Z","end":"2025-01-20T18:10:47.293028Z","steps":["trace[115098696] 'process raft request'  (duration: 104.699498ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T18:10:47.293318Z","caller":"traceutil/trace.go:171","msg":"trace[1098380273] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"101.553526ms","start":"2025-01-20T18:10:47.191754Z","end":"2025-01-20T18:10:47.293308Z","steps":["trace[1098380273] 'process raft request'  (duration: 100.928498ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T18:10:47.309049Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.92043ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T18:10:47.309121Z","caller":"traceutil/trace.go:171","msg":"trace[1079322094] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions; range_end:; response_count:0; response_revision:403; }","duration":"108.016008ms","start":"2025-01-20T18:10:47.201090Z","end":"2025-01-20T18:10:47.309106Z","steps":["trace[1079322094] 'agreement among raft nodes before linearized reading'  (duration: 92.42686ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T18:10:47.338091Z","caller":"traceutil/trace.go:171","msg":"trace[1911389331] transaction","detail":"{read_only:false; response_revision:404; number_of_response:1; }","duration":"101.418302ms","start":"2025-01-20T18:10:47.236652Z","end":"2025-01-20T18:10:47.338070Z","steps":["trace[1911389331] 'process raft request'  (duration: 97.295158ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T18:10:47.338234Z","caller":"traceutil/trace.go:171","msg":"trace[1609325066] transaction","detail":"{read_only:false; response_revision:405; number_of_response:1; }","duration":"100.22897ms","start":"2025-01-20T18:10:47.237995Z","end":"2025-01-20T18:10:47.338224Z","steps":["trace[1609325066] 'process raft request'  (duration: 96.023868ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T18:10:47.338484Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.876313ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T18:10:47.348174Z","caller":"traceutil/trace.go:171","msg":"trace[1171762997] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:406; }","duration":"111.571237ms","start":"2025-01-20T18:10:47.236581Z","end":"2025-01-20T18:10:47.348153Z","steps":["trace[1171762997] 'agreement among raft nodes before linearized reading'  (duration: 101.817164ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T18:10:47.338561Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.72838ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-483552\" limit:1 ","response":"range_response_count:1 size:5745"}
	{"level":"info","ts":"2025-01-20T18:10:47.348922Z","caller":"traceutil/trace.go:171","msg":"trace[1345087388] range","detail":"{range_begin:/registry/minions/addons-483552; range_end:; response_count:1; response_revision:406; }","duration":"112.083146ms","start":"2025-01-20T18:10:47.236828Z","end":"2025-01-20T18:10:47.348911Z","steps":["trace[1345087388] 'agreement among raft nodes before linearized reading'  (duration: 101.676787ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T18:10:47.762333Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.589458ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T18:10:47.762548Z","caller":"traceutil/trace.go:171","msg":"trace[807450841] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:0; response_revision:419; }","duration":"103.829802ms","start":"2025-01-20T18:10:47.658703Z","end":"2025-01-20T18:10:47.762533Z","steps":["trace[807450841] 'agreement among raft nodes before linearized reading'  (duration: 103.57135ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:16:12 up  1:58,  0 users,  load average: 0.25, 1.47, 2.36
	Linux addons-483552 5.15.0-1075-aws #82~20.04.1-Ubuntu SMP Thu Dec 19 05:23:06 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [c644cc930f8a05b1d5b4990de8afae19938b2a66dbb40663d4a6acf04d395a47] <==
	I0120 18:14:05.337902       1 main.go:301] handling current node
	I0120 18:14:15.333860       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0120 18:14:15.333976       1 main.go:301] handling current node
	I0120 18:14:25.333646       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0120 18:14:25.333686       1 main.go:301] handling current node
	I0120 18:14:35.333896       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0120 18:14:35.334031       1 main.go:301] handling current node
	I0120 18:14:45.333854       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0120 18:14:45.333894       1 main.go:301] handling current node
	I0120 18:14:55.336877       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0120 18:14:55.337057       1 main.go:301] handling current node
	I0120 18:15:05.341873       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0120 18:15:05.341982       1 main.go:301] handling current node
	I0120 18:15:15.333891       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0120 18:15:15.334056       1 main.go:301] handling current node
	I0120 18:15:25.339500       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0120 18:15:25.339533       1 main.go:301] handling current node
	I0120 18:15:35.339068       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0120 18:15:35.339104       1 main.go:301] handling current node
	I0120 18:15:45.334571       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0120 18:15:45.334604       1 main.go:301] handling current node
	I0120 18:15:55.340751       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0120 18:15:55.340795       1 main.go:301] handling current node
	I0120 18:16:05.341856       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0120 18:16:05.341997       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1981feda8abc376061360f7a5bb875c16177c8ce626b528bdfa8f0896cd5c462] <==
	I0120 18:13:13.565107       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.136.64"}
	I0120 18:13:44.533694       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0120 18:13:45.563581       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0120 18:13:50.027425       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0120 18:13:50.288818       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0120 18:13:50.637851       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.207.32"}
	I0120 18:13:55.944386       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0120 18:14:09.861020       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0120 18:14:09.861142       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0120 18:14:09.891577       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0120 18:14:09.891646       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0120 18:14:09.915157       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0120 18:14:09.915212       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0120 18:14:10.021432       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0120 18:14:10.021486       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0120 18:14:10.034718       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0120 18:14:10.034875       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0120 18:14:11.023091       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0120 18:14:11.034755       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0120 18:14:11.049988       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	E0120 18:14:46.572197       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0120 18:14:46.583284       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0120 18:14:46.593711       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0120 18:15:01.596312       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0120 18:16:10.301424       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.241.160"}
	
	
	==> kube-controller-manager [f8ab10cd574bd4293be8244fe853d42dec139f1b151c11461c94975d70b02a2d] <==
	E0120 18:15:28.956727       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="roles.rbac.authorization.k8s.io is forbidden: User \"system:serviceaccount:kube-system:namespace-controller\" cannot watch resource \"roles\" in API group \"rbac.authorization.k8s.io\" in the namespace \"local-path-storage\"" logger="namespace-controller" resource="rbac.authorization.k8s.io/v1, Resource=roles"
	W0120 18:15:31.473996       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0120 18:15:31.475000       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0120 18:15:31.475925       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0120 18:15:31.475965       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0120 18:15:33.966788       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	I0120 18:15:35.542980       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-5d76cffbc" duration="5.825µs"
	W0120 18:15:41.498995       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0120 18:15:41.500092       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0120 18:15:41.501080       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0120 18:15:41.501119       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0120 18:16:08.498047       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0120 18:16:08.499101       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0120 18:16:08.500157       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0120 18:16:08.500195       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0120 18:16:09.397836       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0120 18:16:09.398929       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0120 18:16:09.399967       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0120 18:16:09.400006       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0120 18:16:10.042605       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="48.33743ms"
	I0120 18:16:10.074720       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="32.048655ms"
	I0120 18:16:10.074826       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="66.739µs"
	I0120 18:16:10.086882       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="50.657µs"
	I0120 18:16:12.292270       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="25.386348ms"
	I0120 18:16:12.292502       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="48.761µs"
	
	
	==> kube-proxy [2b7ceafc84246299f3c3a07ff9b34f58bc49e27b440a4b5a27bc9caafaf866dd] <==
	I0120 18:10:48.931894       1 server_linux.go:66] "Using iptables proxy"
	I0120 18:10:49.513968       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0120 18:10:49.514038       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0120 18:10:49.570483       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0120 18:10:49.570663       1 server_linux.go:170] "Using iptables Proxier"
	I0120 18:10:49.574026       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0120 18:10:49.574412       1 server.go:497] "Version info" version="v1.32.0"
	I0120 18:10:49.574634       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 18:10:49.575883       1 config.go:199] "Starting service config controller"
	I0120 18:10:49.575949       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0120 18:10:49.575998       1 config.go:105] "Starting endpoint slice config controller"
	I0120 18:10:49.576027       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0120 18:10:49.576504       1 config.go:329] "Starting node config controller"
	I0120 18:10:49.576552       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0120 18:10:49.733900       1 shared_informer.go:320] Caches are synced for node config
	I0120 18:10:49.733943       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0120 18:10:49.740757       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [6ad3359fc83c313a2fbd8962d6bdc4afaad6c42b8a7e9bcd8b4e40daada7782e] <==
	W0120 18:10:37.034067       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0120 18:10:37.034203       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 18:10:37.034356       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0120 18:10:37.034426       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 18:10:37.034518       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0120 18:10:37.034560       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 18:10:37.036868       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0120 18:10:37.037003       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 18:10:37.037119       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0120 18:10:37.037195       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 18:10:37.037484       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0120 18:10:37.037569       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 18:10:37.037680       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0120 18:10:37.037746       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0120 18:10:37.037870       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0120 18:10:37.038034       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 18:10:37.038318       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0120 18:10:37.038481       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 18:10:37.038601       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0120 18:10:37.038781       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 18:10:37.038847       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0120 18:10:37.038951       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 18:10:37.038658       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0120 18:10:37.039062       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0120 18:10:38.324777       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 20 18:15:38 addons-483552 kubelet[1528]: E0120 18:15:38.897552    1528 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3a52725f03405710aa761f58e42498385d50b78bc2bd69e286ab99716cbff53d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3a52725f03405710aa761f58e42498385d50b78bc2bd69e286ab99716cbff53d/diff: no such file or directory, extraDiskErr: <nil>
	Jan 20 18:15:38 addons-483552 kubelet[1528]: E0120 18:15:38.901663    1528 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0dd10d946f20a17ad6525a2bfc66b62a75399c3f2d959e40dc6d3cf755d5bb05/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0dd10d946f20a17ad6525a2bfc66b62a75399c3f2d959e40dc6d3cf755d5bb05/diff: no such file or directory, extraDiskErr: <nil>
	Jan 20 18:15:38 addons-483552 kubelet[1528]: E0120 18:15:38.901675    1528 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/88abe7fe0dc1429e1bade658de185e5b570755228d13fedb29a68b5db62cb596/diff" to get inode usage: stat /var/lib/containers/storage/overlay/88abe7fe0dc1429e1bade658de185e5b570755228d13fedb29a68b5db62cb596/diff: no such file or directory, extraDiskErr: <nil>
	Jan 20 18:15:38 addons-483552 kubelet[1528]: E0120 18:15:38.903930    1528 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e7b91dd338baed9992e1918de10305e6d236f2aceb6ae061fbcc89121cfa0d4b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e7b91dd338baed9992e1918de10305e6d236f2aceb6ae061fbcc89121cfa0d4b/diff: no such file or directory, extraDiskErr: <nil>
	Jan 20 18:15:38 addons-483552 kubelet[1528]: E0120 18:15:38.903970    1528 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e7b91dd338baed9992e1918de10305e6d236f2aceb6ae061fbcc89121cfa0d4b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e7b91dd338baed9992e1918de10305e6d236f2aceb6ae061fbcc89121cfa0d4b/diff: no such file or directory, extraDiskErr: <nil>
	Jan 20 18:15:38 addons-483552 kubelet[1528]: E0120 18:15:38.903989    1528 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0dd10d946f20a17ad6525a2bfc66b62a75399c3f2d959e40dc6d3cf755d5bb05/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0dd10d946f20a17ad6525a2bfc66b62a75399c3f2d959e40dc6d3cf755d5bb05/diff: no such file or directory, extraDiskErr: <nil>
	Jan 20 18:15:38 addons-483552 kubelet[1528]: E0120 18:15:38.918468    1528 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/88abe7fe0dc1429e1bade658de185e5b570755228d13fedb29a68b5db62cb596/diff" to get inode usage: stat /var/lib/containers/storage/overlay/88abe7fe0dc1429e1bade658de185e5b570755228d13fedb29a68b5db62cb596/diff: no such file or directory, extraDiskErr: <nil>
	Jan 20 18:15:38 addons-483552 kubelet[1528]: E0120 18:15:38.918612    1528 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1d883e2b469e2a9c33c0c3f8af757964d622a138232fe164df02e1440cbdd209/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1d883e2b469e2a9c33c0c3f8af757964d622a138232fe164df02e1440cbdd209/diff: no such file or directory, extraDiskErr: <nil>
	Jan 20 18:15:38 addons-483552 kubelet[1528]: E0120 18:15:38.963368    1528 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9bb9ca386bbbb18ac46ebb4eb075e0ff758fe4d228962aac87606c1f90b87473/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9bb9ca386bbbb18ac46ebb4eb075e0ff758fe4d228962aac87606c1f90b87473/diff: no such file or directory, extraDiskErr: <nil>
	Jan 20 18:15:39 addons-483552 kubelet[1528]: I0120 18:15:39.179870    1528 scope.go:117] "RemoveContainer" containerID="efd0cb1d5a5b896675fc28687eb1b2676cf366deefe7d14ae9c16188be766411"
	Jan 20 18:15:39 addons-483552 kubelet[1528]: I0120 18:15:39.206770    1528 scope.go:117] "RemoveContainer" containerID="c9d7ee3a6e6cf1c987f7707b365a181e91ffe97e9a06604c9321b953993a6259"
	Jan 20 18:15:39 addons-483552 kubelet[1528]: E0120 18:15:39.216155    1528 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737396939214662907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595464,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 18:15:39 addons-483552 kubelet[1528]: E0120 18:15:39.216194    1528 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737396939214662907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595464,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 18:15:39 addons-483552 kubelet[1528]: I0120 18:15:39.231865    1528 scope.go:117] "RemoveContainer" containerID="6f44fc7a769f48cc54214f891c6d1940e6538ba66cbf2114f23aae49b3efbc26"
	Jan 20 18:15:49 addons-483552 kubelet[1528]: E0120 18:15:49.219291    1528 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737396949219041995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595464,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 18:15:49 addons-483552 kubelet[1528]: E0120 18:15:49.219330    1528 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737396949219041995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595464,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 18:15:59 addons-483552 kubelet[1528]: E0120 18:15:59.222364    1528 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737396959222126430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595464,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 18:15:59 addons-483552 kubelet[1528]: E0120 18:15:59.222410    1528 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737396959222126430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595464,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 18:16:09 addons-483552 kubelet[1528]: E0120 18:16:09.225838    1528 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737396969225474508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595464,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 18:16:09 addons-483552 kubelet[1528]: E0120 18:16:09.225887    1528 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737396969225474508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595464,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 18:16:10 addons-483552 kubelet[1528]: I0120 18:16:10.052254    1528 memory_manager.go:355] "RemoveStaleState removing state" podUID="504efd3c-dcc2-4127-babf-9c700feffc45" containerName="cloud-spanner-emulator"
	Jan 20 18:16:10 addons-483552 kubelet[1528]: I0120 18:16:10.052321    1528 memory_manager.go:355] "RemoveStaleState removing state" podUID="2dd51f94-ead2-478b-b867-2554b50f79f3" containerName="helper-pod"
	Jan 20 18:16:10 addons-483552 kubelet[1528]: I0120 18:16:10.052332    1528 memory_manager.go:355] "RemoveStaleState removing state" podUID="5abd3eb0-e28e-499f-8f9f-d66db7a6b1bb" containerName="local-path-provisioner"
	Jan 20 18:16:10 addons-483552 kubelet[1528]: I0120 18:16:10.119745    1528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jd5t\" (UniqueName: \"kubernetes.io/projected/ec296f41-614d-4cd5-a40a-a33daaa16e94-kube-api-access-4jd5t\") pod \"hello-world-app-7d9564db4-rw2hh\" (UID: \"ec296f41-614d-4cd5-a40a-a33daaa16e94\") " pod="default/hello-world-app-7d9564db4-rw2hh"
	Jan 20 18:16:10 addons-483552 kubelet[1528]: W0120 18:16:10.418848    1528 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/61755a0b0b5e8584c2f41d807cbd42398facd6d3eed9953bf1fa602f0fa1cb5b/crio-5076c4fc6cde4da70ef5af6f76061195bc7642f8895864f61e4ed6463d8269ed WatchSource:0}: Error finding container 5076c4fc6cde4da70ef5af6f76061195bc7642f8895864f61e4ed6463d8269ed: Status 404 returned error can't find the container with id 5076c4fc6cde4da70ef5af6f76061195bc7642f8895864f61e4ed6463d8269ed
	
	
	==> storage-provisioner [86813762591e0963e600ee4d846cd622e73cb05d179025231c14cdfd21e5e2dc] <==
	I0120 18:11:26.987611       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0120 18:11:26.999722       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0120 18:11:26.999772       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0120 18:11:27.008508       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0120 18:11:27.009880       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-483552_66a6003f-517e-4003-ad3d-934db3adbd6c!
	I0120 18:11:27.011357       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"004a137c-424d-4844-b6c2-562cad55cef9", APIVersion:"v1", ResourceVersion:"924", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-483552_66a6003f-517e-4003-ad3d-934db3adbd6c became leader
	I0120 18:11:27.110205       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-483552_66a6003f-517e-4003-ad3d-934db3adbd6c!
	

-- /stdout --
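Aside on the coredns log in the dump above: the NXDOMAIN bursts are expected behavior, not an error signal. With the default pod DNS policy (options ndots:5), a name such as registry.kube-system.svc.cluster.local is first expanded through every suffix on the pod's search path, which is why each lookup walks kube-system.svc.cluster.local, svc.cluster.local, cluster.local, and the host's us-east-2.compute.internal domain before the verbatim name finally returns NOERROR. A quick way to confirm the search path from inside the cluster (a minimal sketch, assuming the busybox pod listed in the node description is still running; the exact suffixes and nameserver IP will vary by namespace and cluster):

	kubectl --context addons-483552 exec busybox -- cat /etc/resolv.conf
	# typically prints something like:
	#   search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	#   nameserver 10.96.0.10
	#   options ndots:5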
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-483552 -n addons-483552
helpers_test.go:261: (dbg) Run:  kubectl --context addons-483552 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-jrgcw ingress-nginx-admission-patch-jkpt5
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-483552 describe pod ingress-nginx-admission-create-jrgcw ingress-nginx-admission-patch-jkpt5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-483552 describe pod ingress-nginx-admission-create-jrgcw ingress-nginx-admission-patch-jkpt5: exit status 1 (124.574934ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jrgcw" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-jkpt5" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-483552 describe pod ingress-nginx-admission-create-jrgcw ingress-nginx-admission-patch-jkpt5: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-483552 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-483552 addons disable ingress-dns --alsologtostderr -v=1: (1.658691024s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-483552 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-483552 addons disable ingress --alsologtostderr -v=1: (7.819941792s)
--- FAIL: TestAddons/parallel/Ingress (153.27s)
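For local triage of a failure like this one (a sketch, assuming the addons-483552 profile is still up; it relies only on resources named in the post-mortem above), the natural next steps are to inspect the Ingress object and the controller logs, then replay the HTTP check against the node's InternalIP, since the nginx pod itself appears healthy in the node description:

	kubectl --context addons-483552 get ingress -A
	kubectl --context addons-483552 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=100
	HOST=$(kubectl --context addons-483552 -n default get ingress -o jsonpath='{.items[0].spec.rules[0].host}')
	curl -s --max-time 10 -H "Host: $HOST" http://192.168.49.2/   # InternalIP from the describe output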


Test pass (298/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.91
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.0/json-events 5.78
13 TestDownloadOnly/v1.32.0/preload-exists 0
17 TestDownloadOnly/v1.32.0/LogsDuration 0.09
18 TestDownloadOnly/v1.32.0/DeleteAll 0.22
19 TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 184.9
31 TestAddons/serial/GCPAuth/Namespaces 0.23
32 TestAddons/serial/GCPAuth/FakeCredentials 10.93
35 TestAddons/parallel/Registry 17.68
37 TestAddons/parallel/InspektorGadget 12.01
38 TestAddons/parallel/MetricsServer 6.86
40 TestAddons/parallel/CSI 46.76
41 TestAddons/parallel/Headlamp 18.31
42 TestAddons/parallel/CloudSpanner 6.58
43 TestAddons/parallel/LocalPath 53.58
44 TestAddons/parallel/NvidiaDevicePlugin 6.54
45 TestAddons/parallel/Yakd 11.79
47 TestAddons/StoppedEnableDisable 12.22
48 TestCertOptions 36.51
49 TestCertExpiration 241.01
51 TestForceSystemdFlag 36.36
52 TestForceSystemdEnv 41.99
58 TestErrorSpam/setup 30.65
59 TestErrorSpam/start 0.78
60 TestErrorSpam/status 1.09
61 TestErrorSpam/pause 1.79
62 TestErrorSpam/unpause 1.82
63 TestErrorSpam/stop 1.49
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 77.91
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 55.03
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.41
75 TestFunctional/serial/CacheCmd/cache/add_local 1.45
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.19
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.16
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 44.29
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.78
86 TestFunctional/serial/LogsFileCmd 1.78
87 TestFunctional/serial/InvalidService 4.25
89 TestFunctional/parallel/ConfigCmd 0.51
90 TestFunctional/parallel/DashboardCmd 9.52
91 TestFunctional/parallel/DryRun 0.55
92 TestFunctional/parallel/InternationalLanguage 0.26
93 TestFunctional/parallel/StatusCmd 1.04
97 TestFunctional/parallel/ServiceCmdConnect 11.77
98 TestFunctional/parallel/AddonsCmd 0.22
99 TestFunctional/parallel/PersistentVolumeClaim 23.97
101 TestFunctional/parallel/SSHCmd 0.66
102 TestFunctional/parallel/CpCmd 2.54
104 TestFunctional/parallel/FileSync 0.48
105 TestFunctional/parallel/CertSync 2.17
109 TestFunctional/parallel/NodeLabels 0.12
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.86
113 TestFunctional/parallel/License 0.3
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.52
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.18
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 7.21
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
127 TestFunctional/parallel/ProfileCmd/profile_list 0.45
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
129 TestFunctional/parallel/MountCmd/any-port 8.9
130 TestFunctional/parallel/ServiceCmd/List 0.54
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
133 TestFunctional/parallel/ServiceCmd/Format 0.38
134 TestFunctional/parallel/ServiceCmd/URL 0.39
135 TestFunctional/parallel/MountCmd/specific-port 2.22
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.63
137 TestFunctional/parallel/Version/short 0.08
138 TestFunctional/parallel/Version/components 1.29
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.97
144 TestFunctional/parallel/ImageCommands/Setup 0.89
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.24
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
148 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.7
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.1
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.31
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.56
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.8
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 169.3
162 TestMultiControlPlane/serial/DeployApp 8.83
163 TestMultiControlPlane/serial/PingHostFromPods 1.74
164 TestMultiControlPlane/serial/AddWorkerNode 62.98
165 TestMultiControlPlane/serial/NodeLabels 0.12
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
167 TestMultiControlPlane/serial/CopyFile 19.09
168 TestMultiControlPlane/serial/StopSecondaryNode 12.86
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.8
170 TestMultiControlPlane/serial/RestartSecondaryNode 30.62
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.21
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 208.36
173 TestMultiControlPlane/serial/DeleteSecondaryNode 12.54
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
175 TestMultiControlPlane/serial/StopCluster 35.81
176 TestMultiControlPlane/serial/RestartCluster 104.51
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.75
178 TestMultiControlPlane/serial/AddSecondaryNode 70.36
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.97
183 TestJSONOutput/start/Command 76.84
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.76
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.67
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.89
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.24
208 TestKicCustomNetwork/create_custom_network 42.11
209 TestKicCustomNetwork/use_default_bridge_network 32.84
210 TestKicExistingNetwork 31.42
211 TestKicCustomSubnet 32.38
212 TestKicStaticIP 32.04
213 TestMainNoArgs 0.06
214 TestMinikubeProfile 74.02
217 TestMountStart/serial/StartWithMountFirst 6.4
218 TestMountStart/serial/VerifyMountFirst 0.26
219 TestMountStart/serial/StartWithMountSecond 6.82
220 TestMountStart/serial/VerifyMountSecond 0.25
221 TestMountStart/serial/DeleteFirst 1.63
222 TestMountStart/serial/VerifyMountPostDelete 0.25
223 TestMountStart/serial/Stop 1.2
224 TestMountStart/serial/RestartStopped 8.37
225 TestMountStart/serial/VerifyMountPostStop 0.26
228 TestMultiNode/serial/FreshStart2Nodes 74.16
229 TestMultiNode/serial/DeployApp2Nodes 6.69
230 TestMultiNode/serial/PingHostFrom2Pods 1.01
231 TestMultiNode/serial/AddNode 27.59
232 TestMultiNode/serial/MultiNodeLabels 0.09
233 TestMultiNode/serial/ProfileList 0.65
234 TestMultiNode/serial/CopyFile 10.37
235 TestMultiNode/serial/StopNode 2.26
236 TestMultiNode/serial/StartAfterStop 9.92
237 TestMultiNode/serial/RestartKeepsNodes 87.69
238 TestMultiNode/serial/DeleteNode 5.3
239 TestMultiNode/serial/StopMultiNode 23.8
240 TestMultiNode/serial/RestartMultiNode 56.41
241 TestMultiNode/serial/ValidateNameConflict 36.47
246 TestPreload 127.57
248 TestScheduledStopUnix 106.27
251 TestInsufficientStorage 10.56
252 TestRunningBinaryUpgrade 82.57
254 TestKubernetesUpgrade 410.24
255 TestMissingContainerUpgrade 160.44
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
258 TestNoKubernetes/serial/StartWithK8s 39.54
259 TestNoKubernetes/serial/StartWithStopK8s 7.58
260 TestNoKubernetes/serial/Start 9.5
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
262 TestNoKubernetes/serial/ProfileList 1.18
263 TestNoKubernetes/serial/Stop 1.28
264 TestNoKubernetes/serial/StartNoArgs 7.55
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
266 TestStoppedBinaryUpgrade/Setup 0.75
267 TestStoppedBinaryUpgrade/Upgrade 91.71
268 TestStoppedBinaryUpgrade/MinikubeLogs 1.36
277 TestPause/serial/Start 77.71
278 TestPause/serial/SecondStartNoReconfiguration 28
279 TestPause/serial/Pause 0.83
280 TestPause/serial/VerifyStatus 0.36
281 TestPause/serial/Unpause 0.72
282 TestPause/serial/PauseAgain 0.92
283 TestPause/serial/DeletePaused 2.71
284 TestPause/serial/VerifyDeletedResources 0.41
292 TestNetworkPlugins/group/false 4.77
297 TestStartStop/group/old-k8s-version/serial/FirstStart 181.61
299 TestStartStop/group/no-preload/serial/FirstStart 74.62
300 TestStartStop/group/old-k8s-version/serial/DeployApp 10.66
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.28
302 TestStartStop/group/old-k8s-version/serial/Stop 12.67
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
304 TestStartStop/group/old-k8s-version/serial/SecondStart 135.94
305 TestStartStop/group/no-preload/serial/DeployApp 10.36
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.17
307 TestStartStop/group/no-preload/serial/Stop 12.35
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
309 TestStartStop/group/no-preload/serial/SecondStart 288.49
310 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
311 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.11
312 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
313 TestStartStop/group/old-k8s-version/serial/Pause 2.94
315 TestStartStop/group/embed-certs/serial/FirstStart 51.61
316 TestStartStop/group/embed-certs/serial/DeployApp 11.38
317 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.13
318 TestStartStop/group/embed-certs/serial/Stop 11.94
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
320 TestStartStop/group/embed-certs/serial/SecondStart 278.29
321 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
322 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.11
323 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
324 TestStartStop/group/no-preload/serial/Pause 3.11
326 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 53.67
327 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.39
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.15
329 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.97
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 300.77
332 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
333 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.11
334 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
335 TestStartStop/group/embed-certs/serial/Pause 3.11
337 TestStartStop/group/newest-cni/serial/FirstStart 36.53
338 TestStartStop/group/newest-cni/serial/DeployApp 0
339 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.99
340 TestStartStop/group/newest-cni/serial/Stop 1.24
341 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
342 TestStartStop/group/newest-cni/serial/SecondStart 17.8
343 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
346 TestStartStop/group/newest-cni/serial/Pause 3.09
347 TestNetworkPlugins/group/auto/Start 77.28
348 TestNetworkPlugins/group/auto/KubeletFlags 0.32
349 TestNetworkPlugins/group/auto/NetCatPod 11.3
350 TestNetworkPlugins/group/auto/DNS 0.18
351 TestNetworkPlugins/group/auto/Localhost 0.16
352 TestNetworkPlugins/group/auto/HairPin 0.16
353 TestNetworkPlugins/group/kindnet/Start 84.34
354 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
355 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
356 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
357 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.12
358 TestNetworkPlugins/group/calico/Start 71.58
359 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
360 TestNetworkPlugins/group/kindnet/KubeletFlags 0.43
361 TestNetworkPlugins/group/kindnet/NetCatPod 13.43
362 TestNetworkPlugins/group/kindnet/DNS 0.37
363 TestNetworkPlugins/group/kindnet/Localhost 0.24
364 TestNetworkPlugins/group/kindnet/HairPin 0.31
365 TestNetworkPlugins/group/custom-flannel/Start 61.52
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/calico/KubeletFlags 0.33
368 TestNetworkPlugins/group/calico/NetCatPod 13.34
369 TestNetworkPlugins/group/calico/DNS 0.27
370 TestNetworkPlugins/group/calico/Localhost 0.18
371 TestNetworkPlugins/group/calico/HairPin 0.21
372 TestNetworkPlugins/group/enable-default-cni/Start 44.11
373 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
374 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.34
375 TestNetworkPlugins/group/custom-flannel/DNS 0.27
376 TestNetworkPlugins/group/custom-flannel/Localhost 0.25
377 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
378 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
379 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.36
380 TestNetworkPlugins/group/flannel/Start 61.64
381 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
382 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
383 TestNetworkPlugins/group/enable-default-cni/HairPin 0.22
384 TestNetworkPlugins/group/bridge/Start 69.3
385 TestNetworkPlugins/group/flannel/ControllerPod 6.01
386 TestNetworkPlugins/group/flannel/KubeletFlags 0.44
387 TestNetworkPlugins/group/flannel/NetCatPod 11.38
388 TestNetworkPlugins/group/flannel/DNS 0.2
389 TestNetworkPlugins/group/flannel/Localhost 0.16
390 TestNetworkPlugins/group/flannel/HairPin 0.15
391 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
392 TestNetworkPlugins/group/bridge/NetCatPod 10.29
393 TestNetworkPlugins/group/bridge/DNS 0.19
394 TestNetworkPlugins/group/bridge/Localhost 0.15
395 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (7.91s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-505162 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-505162 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.908750902s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.91s)
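
For reference, the download-only flow above can be reproduced by hand; a minimal sketch, assuming a minikube binary on PATH (the profile name is illustrative, flags as in the run above):

	# Pre-download the kicbase image and the v1.20.0 preload without
	# creating a cluster; -o=json streams progress as JSON events.
	minikube start -p download-demo --download-only -o=json \
	  --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker
	# Remove the throwaway profile when done.
	minikube delete -p download-demo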

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0120 18:09:39.565680  304547 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0120 18:09:39.565772  304547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-299163/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
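
The preload check only stats the cached tarball under the minikube home; a minimal sketch of the same check from a shell, assuming the default MINIKUBE_HOME layout (version string per the run above):

	# Preload tarballs are keyed by k8s version, runtime, and arch.
	ls -lh "${MINIKUBE_HOME:-$HOME/.minikube}/cache/preloaded-tarball/" \
	  | grep "v1.20.0-cri-o-overlay-arm64"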

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-505162
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-505162: exit status 85 (91.459465ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-505162 | jenkins | v1.35.0 | 20 Jan 25 18:09 UTC |          |
	|         | -p download-only-505162        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 18:09:31
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 18:09:31.705452  304552 out.go:345] Setting OutFile to fd 1 ...
	I0120 18:09:31.705610  304552 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 18:09:31.705636  304552 out.go:358] Setting ErrFile to fd 2...
	I0120 18:09:31.705653  304552 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 18:09:31.705993  304552 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-299163/.minikube/bin
	W0120 18:09:31.706190  304552 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20109-299163/.minikube/config/config.json: open /home/jenkins/minikube-integration/20109-299163/.minikube/config/config.json: no such file or directory
	I0120 18:09:31.706701  304552 out.go:352] Setting JSON to true
	I0120 18:09:31.707638  304552 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6716,"bootTime":1737389856,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0120 18:09:31.707712  304552 start.go:139] virtualization:  
	I0120 18:09:31.711938  304552 out.go:97] [download-only-505162] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	W0120 18:09:31.712189  304552 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20109-299163/.minikube/cache/preloaded-tarball: no such file or directory
	I0120 18:09:31.712261  304552 notify.go:220] Checking for updates...
	I0120 18:09:31.715172  304552 out.go:169] MINIKUBE_LOCATION=20109
	I0120 18:09:31.718229  304552 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 18:09:31.721221  304552 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20109-299163/kubeconfig
	I0120 18:09:31.724125  304552 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-299163/.minikube
	I0120 18:09:31.727139  304552 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0120 18:09:31.732905  304552 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0120 18:09:31.733238  304552 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 18:09:31.759417  304552 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 18:09:31.759517  304552 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 18:09:31.816786  304552 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-20 18:09:31.80709455 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 18:09:31.816896  304552 docker.go:318] overlay module found
	I0120 18:09:31.819882  304552 out.go:97] Using the docker driver based on user configuration
	I0120 18:09:31.819910  304552 start.go:297] selected driver: docker
	I0120 18:09:31.819918  304552 start.go:901] validating driver "docker" against <nil>
	I0120 18:09:31.820035  304552 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 18:09:31.865951  304552 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-20 18:09:31.857577743 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 18:09:31.866184  304552 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 18:09:31.866473  304552 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0120 18:09:31.866623  304552 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0120 18:09:31.869820  304552 out.go:169] Using Docker driver with root privileges
	I0120 18:09:31.872579  304552 cni.go:84] Creating CNI manager for ""
	I0120 18:09:31.872635  304552 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0120 18:09:31.872649  304552 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0120 18:09:31.872728  304552 start.go:340] cluster config:
	{Name:download-only-505162 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-505162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 18:09:31.875893  304552 out.go:97] Starting "download-only-505162" primary control-plane node in "download-only-505162" cluster
	I0120 18:09:31.875920  304552 cache.go:121] Beginning downloading kic base image for docker with crio
	I0120 18:09:31.878673  304552 out.go:97] Pulling base image v0.0.46 ...
	I0120 18:09:31.878719  304552 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 18:09:31.878807  304552 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0120 18:09:31.894265  304552 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0120 18:09:31.894445  304552 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0120 18:09:31.894543  304552 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0120 18:09:31.939905  304552 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0120 18:09:31.939934  304552 cache.go:56] Caching tarball of preloaded images
	I0120 18:09:31.940140  304552 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 18:09:31.943458  304552 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0120 18:09:31.943482  304552 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0120 18:09:32.029486  304552 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/20109-299163/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-505162 host does not exist
	  To start a cluster, run: "minikube start -p download-only-505162"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
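
Exit status 85 is the expected outcome here: a --download-only profile never creates a host, so "minikube logs" has nothing to collect, and the test counts the non-zero exit as a pass. A minimal sketch of the same check, assuming the profile from the run above still exists:

	# logs against a never-started profile should exit non-zero
	out/minikube-linux-arm64 logs -p download-only-505162
	echo "exit: $?"   # observed as 85 in the run above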

TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-505162
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.32.0/json-events (5.78s)

=== RUN   TestDownloadOnly/v1.32.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-689857 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-689857 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.78461483s)
--- PASS: TestDownloadOnly/v1.32.0/json-events (5.78s)

TestDownloadOnly/v1.32.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.0/preload-exists
I0120 18:09:45.803643  304547 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
I0120 18:09:45.803692  304547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-299163/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.0/preload-exists (0.00s)

TestDownloadOnly/v1.32.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.32.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-689857
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-689857: exit status 85 (92.934991ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-505162 | jenkins | v1.35.0 | 20 Jan 25 18:09 UTC |                     |
	|         | -p download-only-505162        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 20 Jan 25 18:09 UTC | 20 Jan 25 18:09 UTC |
	| delete  | -p download-only-505162        | download-only-505162 | jenkins | v1.35.0 | 20 Jan 25 18:09 UTC | 20 Jan 25 18:09 UTC |
	| start   | -o=json --download-only        | download-only-689857 | jenkins | v1.35.0 | 20 Jan 25 18:09 UTC |                     |
	|         | -p download-only-689857        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 18:09:40
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 18:09:40.071970  304754 out.go:345] Setting OutFile to fd 1 ...
	I0120 18:09:40.072120  304754 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 18:09:40.072131  304754 out.go:358] Setting ErrFile to fd 2...
	I0120 18:09:40.072161  304754 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 18:09:40.072430  304754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-299163/.minikube/bin
	I0120 18:09:40.072873  304754 out.go:352] Setting JSON to true
	I0120 18:09:40.073830  304754 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6724,"bootTime":1737389856,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0120 18:09:40.073912  304754 start.go:139] virtualization:  
	I0120 18:09:40.077458  304754 out.go:97] [download-only-689857] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0120 18:09:40.077816  304754 notify.go:220] Checking for updates...
	I0120 18:09:40.081711  304754 out.go:169] MINIKUBE_LOCATION=20109
	I0120 18:09:40.084883  304754 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 18:09:40.087907  304754 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20109-299163/kubeconfig
	I0120 18:09:40.091331  304754 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-299163/.minikube
	I0120 18:09:40.094388  304754 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0120 18:09:40.099999  304754 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0120 18:09:40.100357  304754 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 18:09:40.127267  304754 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 18:09:40.127385  304754 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 18:09:40.190577  304754 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-01-20 18:09:40.179550965 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 18:09:40.190695  304754 docker.go:318] overlay module found
	I0120 18:09:40.193808  304754 out.go:97] Using the docker driver based on user configuration
	I0120 18:09:40.193871  304754 start.go:297] selected driver: docker
	I0120 18:09:40.193879  304754 start.go:901] validating driver "docker" against <nil>
	I0120 18:09:40.194000  304754 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 18:09:40.249714  304754 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-01-20 18:09:40.240759217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 18:09:40.250009  304754 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 18:09:40.250294  304754 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0120 18:09:40.250455  304754 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0120 18:09:40.253615  304754 out.go:169] Using Docker driver with root privileges
	I0120 18:09:40.256534  304754 cni.go:84] Creating CNI manager for ""
	I0120 18:09:40.256608  304754 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0120 18:09:40.256618  304754 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0120 18:09:40.256700  304754 start.go:340] cluster config:
	{Name:download-only-689857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:download-only-689857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 18:09:40.259723  304754 out.go:97] Starting "download-only-689857" primary control-plane node in "download-only-689857" cluster
	I0120 18:09:40.259762  304754 cache.go:121] Beginning downloading kic base image for docker with crio
	I0120 18:09:40.262751  304754 out.go:97] Pulling base image v0.0.46 ...
	I0120 18:09:40.262807  304754 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 18:09:40.262930  304754 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0120 18:09:40.279457  304754 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0120 18:09:40.279612  304754 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0120 18:09:40.279648  304754 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory, skipping pull
	I0120 18:09:40.279667  304754 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in cache, skipping pull
	I0120 18:09:40.279679  304754 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	I0120 18:09:40.322679  304754 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I0120 18:09:40.322716  304754 cache.go:56] Caching tarball of preloaded images
	I0120 18:09:40.323545  304754 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 18:09:40.326690  304754 out.go:97] Downloading Kubernetes v1.32.0 preload ...
	I0120 18:09:40.326728  304754 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 ...
	I0120 18:09:40.447109  304754 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:d3dc3b83b826438926b7b91af837ed7b -> /home/jenkins/minikube-integration/20109-299163/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I0120 18:09:44.209620  304754 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 ...
	I0120 18:09:44.209729  304754 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20109-299163/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-689857 host does not exist
	  To start a cluster, run: "minikube start -p download-only-689857"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.0/LogsDuration (0.09s)

TestDownloadOnly/v1.32.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.32.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.32.0/DeleteAll (0.22s)

TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-689857
--- PASS: TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I0120 18:09:47.122199  304547 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-052288 --alsologtostderr --binary-mirror http://127.0.0.1:46851 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-052288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-052288
--- PASS: TestBinaryMirror (0.61s)
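
TestBinaryMirror points minikube at a local HTTP mirror via --binary-mirror so Kubernetes binaries are fetched from it rather than dl.k8s.io. A minimal sketch, assuming a directory pre-populated with the release binaries (the server command, port, and directory are illustrative):

	# Serve pre-fetched binaries, then download through the mirror.
	python3 -m http.server 46851 --directory ./mirror &
	minikube start --download-only -p mirror-demo \
	  --binary-mirror http://127.0.0.1:46851 \
	  --driver=docker --container-runtime=crio
	minikube delete -p mirror-demo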

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-483552
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-483552: exit status 85 (81.717567ms)

-- stdout --
	* Profile "addons-483552" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-483552"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-483552
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-483552: exit status 85 (81.190834ms)

-- stdout --
	* Profile "addons-483552" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-483552"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
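
Both PreSetup checks assert the same contract: addon commands against a profile that does not exist fail fast with exit status 85 and a hint, instead of creating anything. A minimal sketch, assuming no profile with the given name exists (the name is illustrative):

	# Neither command should create a profile; both exit 85.
	minikube addons enable dashboard -p no-such-profile;  echo "enable: $?"
	minikube addons disable dashboard -p no-such-profile; echo "disable: $?"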

TestAddons/Setup (184.9s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-483552 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-483552 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m4.895165767s)
--- PASS: TestAddons/Setup (184.90s)
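
The setup start enables every addon under test through repeated --addons flags; addons can also be toggled after the cluster is up. A minimal sketch, assuming the profile from the run above:

	# Toggle a single addon post-start and confirm its status.
	minikube addons enable metrics-server -p addons-483552
	minikube addons list -p addons-483552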

TestAddons/serial/GCPAuth/Namespaces (0.23s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-483552 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-483552 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.23s)
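
This check verifies that the gcp-auth addon replicates its credentials secret into newly created namespaces. A minimal sketch, assuming the addon is enabled (the namespace name is illustrative):

	kubectl --context addons-483552 create ns secret-sync-demo
	# The gcp-auth secret should be present in the new namespace.
	kubectl --context addons-483552 get secret gcp-auth -n secret-sync-demo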

TestAddons/serial/GCPAuth/FakeCredentials (10.93s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-483552 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-483552 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b31945b7-9193-4dd1-8667-5a108083b0d6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b31945b7-9193-4dd1-8667-5a108083b0d6] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.006180308s
addons_test.go:633: (dbg) Run:  kubectl --context addons-483552 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-483552 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-483552 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-483552 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.93s)
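
The gcp-auth webhook mutates new pods to mount a fake credentials file and export matching environment variables; the assertions above can be repeated by hand. A minimal sketch, assuming the busybox pod from testdata is still running:

	# Both variables and the mounted file are injected by the webhook.
	kubectl --context addons-483552 exec busybox -- \
	  printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT
	kubectl --context addons-483552 exec busybox -- cat /google-app-creds.json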

TestAddons/parallel/Registry (17.68s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 11.504921ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c86875c6f-8cc5t" [11eece58-b31f-45d9-9831-26bd887b6621] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003117007s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-q7m8l" [b119f04c-b7e8-4eba-926b-814b9001158d] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003604327s
addons_test.go:331: (dbg) Run:  kubectl --context addons-483552 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-483552 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-483552 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.706980761s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-483552 ip
2025/01/20 18:13:29 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-483552 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.68s)
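
The registry addon is probed from inside the cluster, since the service DNS name only resolves there. A minimal sketch of the same probe, assuming the addon is enabled (the pod name is illustrative):

	# HEAD the registry through its in-cluster service name.
	kubectl --context addons-483552 run --rm -it registry-probe \
	  --restart=Never --image=gcr.io/k8s-minikube/busybox -- \
	  wget --spider -S http://registry.kube-system.svc.cluster.local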

TestAddons/parallel/InspektorGadget (12.01s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-n82gp" [be1af935-9f2d-4d9d-987c-5ce5bda827b4] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00626443s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-483552 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-483552 addons disable inspektor-gadget --alsologtostderr -v=1: (6.000346861s)
--- PASS: TestAddons/parallel/InspektorGadget (12.01s)

TestAddons/parallel/MetricsServer (6.86s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 8.482389ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-l78fs" [24ef1eeb-9b7c-42cc-a2e9-ba2c03c3cb4c] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004217193s
addons_test.go:402: (dbg) Run:  kubectl --context addons-483552 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-483552 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.86s)
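
kubectl top only succeeds once metrics-server is registered as an aggregated API. A sketch of the underlying check (the raw path is the standard metrics.k8s.io endpoint, not something this test queries directly):

    # Equivalent of what `kubectl top pods -n kube-system` reads:
    kubectl --context addons-483552 get --raw \
      /apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods | head -c 400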

TestAddons/parallel/CSI (46.76s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0120 18:13:30.365078  304547 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0120 18:13:30.371527  304547 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0120 18:13:30.371560  304547 kapi.go:107] duration metric: took 10.211739ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 10.229387ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-483552 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483552 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483552 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483552 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483552 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483552 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483552 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483552 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483552 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483552 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-483552 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c2d69540-0a52-4d8d-9b6d-371821e31dba] Pending
helpers_test.go:344: "task-pv-pod" [c2d69540-0a52-4d8d-9b6d-371821e31dba] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c2d69540-0a52-4d8d-9b6d-371821e31dba] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004046969s
addons_test.go:511: (dbg) Run:  kubectl --context addons-483552 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-483552 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-483552 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-483552 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-483552 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-483552 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483552 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483552 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483552 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483552 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483552 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483552 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483552 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483552 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-483552 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
I0120 18:13:59.660158  304547 kapi.go:150] Service nginx in namespace default found.
helpers_test.go:344: "task-pv-pod-restore" [17e1aa97-c178-4e99-b509-0808027a5b11] Pending
helpers_test.go:344: "task-pv-pod-restore" [17e1aa97-c178-4e99-b509-0808027a5b11] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004825388s
addons_test.go:553: (dbg) Run:  kubectl --context addons-483552 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-483552 delete pod task-pv-pod-restore: (1.29677829s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-483552 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-483552 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-483552 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-483552 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-483552 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.915874636s)
--- PASS: TestAddons/parallel/CSI (46.76s)
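
The testdata manifests are not reproduced in this log; below is a minimal sketch of the PVC-then-snapshot chain the test exercises. Object names match the log, but the storage class and snapshot class names are assumptions:

    kubectl --context addons-483552 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc
    spec:
      accessModes: ["ReadWriteOnce"]
      resources: { requests: { storage: 1Gi } }
      storageClassName: csi-hostpath-sc               # assumed class name
    ---
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: new-snapshot-demo
    spec:
      volumeSnapshotClassName: csi-hostpath-snapclass  # assumed class name
      source: { persistentVolumeClaimName: hpvc }
    EOF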

TestAddons/parallel/Headlamp (18.31s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-483552 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-n77c7" [883edffc-17a6-48e1-b8fc-2f01d7810ea6] Pending
helpers_test.go:344: "headlamp-69d78d796f-n77c7" [883edffc-17a6-48e1-b8fc-2f01d7810ea6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-n77c7" [883edffc-17a6-48e1-b8fc-2f01d7810ea6] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003543642s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-483552 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-483552 addons disable headlamp --alsologtostderr -v=1: (6.325918808s)
--- PASS: TestAddons/parallel/Headlamp (18.31s)

TestAddons/parallel/CloudSpanner (6.58s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-tff4k" [504efd3c-dcc2-4127-babf-9c700feffc45] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00382219s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-483552 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.58s)

TestAddons/parallel/LocalPath (53.58s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-483552 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-483552 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483552 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483552 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483552 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483552 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483552 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483552 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c26af9f2-3468-4438-a382-e0f9488e1139] Pending
helpers_test.go:344: "test-local-path" [c26af9f2-3468-4438-a382-e0f9488e1139] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c26af9f2-3468-4438-a382-e0f9488e1139] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003509949s
addons_test.go:906: (dbg) Run:  kubectl --context addons-483552 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-483552 ssh "cat /opt/local-path-provisioner/pvc-33814caa-87d3-4ef4-8953-290c67f6d8c4_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-483552 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-483552 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-483552 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-483552 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.40532511s)
--- PASS: TestAddons/parallel/LocalPath (53.58s)
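
The local-path provisioner backs each PVC with a directory under /opt/local-path-provisioner on the node, which is why the test can ssh in and cat file1 directly. A sketch of the same round trip (the PVC spec is an assumption; local-path is the provisioner's default storage class name):

    kubectl --context addons-483552 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: local-path
      resources: { requests: { storage: 128Mi } }
    EOF
    # Once a pod has written to the volume, the data is visible on the node:
    out/minikube-linux-arm64 -p addons-483552 ssh "ls /opt/local-path-provisioner/"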

TestAddons/parallel/NvidiaDevicePlugin (6.54s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-sbfpn" [1873cb8c-d69f-4e43-b297-4784a8e6b0c1] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004058834s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-483552 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.54s)

TestAddons/parallel/Yakd (11.79s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-mxtc2" [05432d92-e15c-41c0-a9da-f6f87fbb8822] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003732281s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-483552 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-483552 addons disable yakd --alsologtostderr -v=1: (5.788182231s)
--- PASS: TestAddons/parallel/Yakd (11.79s)

TestAddons/StoppedEnableDisable (12.22s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-483552
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-483552: (11.908851987s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-483552
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-483552
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-483552
--- PASS: TestAddons/StoppedEnableDisable (12.22s)

TestCertOptions (36.51s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-530150 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-530150 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (33.819226607s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-530150 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-530150 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-530150 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-530150" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-530150
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-530150: (2.020311432s)
--- PASS: TestCertOptions (36.51s)
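
The openssl call above dumps the whole apiserver certificate; what the test actually cares about is that the extra --apiserver-ips/--apiserver-names values became SANs and that the non-default port took effect. A narrower manual check (the grep is an assumption, not part of the test):

    out/minikube-linux-arm64 -p cert-options-530150 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'
    # Expect 192.168.15.15 and www.google.com among the entries, and the
    # server URL in `kubectl config view` to end in :8555.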

TestCertExpiration (241.01s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-885901 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E0120 18:58:42.906244  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-885901 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (41.386130013s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-885901 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-885901 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (17.14087359s)
helpers_test.go:175: Cleaning up "cert-expiration-885901" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-885901
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-885901: (2.485263927s)
--- PASS: TestCertExpiration (241.01s)
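
The test starts the profile with --cert-expiration=3m, waits out the three minutes (most of the 241s wall time), then restarts with --cert-expiration=8760h so minikube regenerates the expired certificates. A sketch of inspecting the expiry by hand (standard openssl flags; the cert path matches the one used elsewhere in this report):

    out/minikube-linux-arm64 -p cert-expiration-885901 ssh \
      "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"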

TestForceSystemdFlag (36.36s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-798085 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0120 18:57:53.582446  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-798085 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.235927418s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-798085 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-798085" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-798085
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-798085: (2.728198511s)
--- PASS: TestForceSystemdFlag (36.36s)
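
With --force-systemd, CRI-O should be switched to the systemd cgroup manager, and the test reads the drop-in config to confirm. A sketch of the same check (cgroup_manager is CRI-O's standard option name; the expected value is an assumption from the flag's purpose):

    out/minikube-linux-arm64 -p force-systemd-flag-798085 ssh \
      "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
    # Expected: cgroup_manager = "systemd"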

TestForceSystemdEnv (41.99s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-863419 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-863419 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.087326512s)
helpers_test.go:175: Cleaning up "force-systemd-env-863419" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-863419
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-863419: (2.906075381s)
--- PASS: TestForceSystemdEnv (41.99s)
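
The env-var variant drives the same behavior through MINIKUBE_FORCE_SYSTEMD instead of a flag (the variable shows up, unset, in the dry-run output later in this report). A sketch of the invocation, assuming the documented env-var semantics:

    MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start \
      -p force-systemd-env-863419 --memory=2048 --driver=docker --container-runtime=crio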

TestErrorSpam/setup (30.65s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-455495 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-455495 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-455495 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-455495 --driver=docker  --container-runtime=crio: (30.648710969s)
--- PASS: TestErrorSpam/setup (30.65s)

TestErrorSpam/start (0.78s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-455495 --log_dir /tmp/nospam-455495 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-455495 --log_dir /tmp/nospam-455495 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-455495 --log_dir /tmp/nospam-455495 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

TestErrorSpam/status (1.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-455495 --log_dir /tmp/nospam-455495 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-455495 --log_dir /tmp/nospam-455495 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-455495 --log_dir /tmp/nospam-455495 status
--- PASS: TestErrorSpam/status (1.09s)

TestErrorSpam/pause (1.79s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-455495 --log_dir /tmp/nospam-455495 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-455495 --log_dir /tmp/nospam-455495 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-455495 --log_dir /tmp/nospam-455495 pause
--- PASS: TestErrorSpam/pause (1.79s)

TestErrorSpam/unpause (1.82s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-455495 --log_dir /tmp/nospam-455495 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-455495 --log_dir /tmp/nospam-455495 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-455495 --log_dir /tmp/nospam-455495 unpause
--- PASS: TestErrorSpam/unpause (1.82s)

TestErrorSpam/stop (1.49s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-455495 --log_dir /tmp/nospam-455495 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-455495 --log_dir /tmp/nospam-455495 stop: (1.285099634s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-455495 --log_dir /tmp/nospam-455495 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-455495 --log_dir /tmp/nospam-455495 stop
--- PASS: TestErrorSpam/stop (1.49s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20109-299163/.minikube/files/etc/test/nested/copy/304547/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
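
This test relies on minikube's file sync: anything under ~/.minikube/files/ is copied into the node at the same relative path at start time. A sketch with generic paths (the numeric directory in the log is the test process PID; the example name below is arbitrary):

    # Staged on the host before `minikube start`...
    mkdir -p ~/.minikube/files/etc/test/nested/copy/example
    echo hello > ~/.minikube/files/etc/test/nested/copy/example/hosts
    # ...and visible inside the node afterwards:
    out/minikube-linux-arm64 -p functional-632700 ssh "cat /etc/test/nested/copy/example/hosts"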

TestFunctional/serial/StartWithProxy (77.91s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-632700 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0120 18:17:53.584581  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:17:53.591029  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:17:53.602568  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:17:53.624133  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:17:53.665820  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:17:53.747283  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:17:53.908750  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:17:54.230498  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:17:54.872522  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:17:56.153856  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:17:58.716077  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:18:03.838329  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:18:14.080149  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:18:34.561810  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-632700 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m17.911125264s)
--- PASS: TestFunctional/serial/StartWithProxy (77.91s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (55.03s)

=== RUN   TestFunctional/serial/SoftStart
I0120 18:18:41.771408  304547 config.go:182] Loaded profile config "functional-632700": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-632700 --alsologtostderr -v=8
E0120 18:19:15.523924  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-632700 --alsologtostderr -v=8: (55.013008376s)
functional_test.go:663: soft start took 55.025997105s for "functional-632700" cluster.
I0120 18:19:36.784771  304547 config.go:182] Loaded profile config "functional-632700": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/SoftStart (55.03s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-632700 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-632700 cache add registry.k8s.io/pause:3.1: (1.597152573s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-632700 cache add registry.k8s.io/pause:3.3: (1.410707953s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-632700 cache add registry.k8s.io/pause:latest: (1.405891596s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.41s)

TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-632700 /tmp/TestFunctionalserialCacheCmdcacheadd_local2658460862/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 cache add minikube-local-cache-test:functional-632700
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 cache delete minikube-local-cache-test:functional-632700
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-632700
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-632700 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (290.44989ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-632700 cache reload: (1.244709294s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.19s)
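
The sequence above: delete the image from the node's runtime, confirm `crictl inspecti` now fails, run `cache reload` to push every cached image back, and confirm the inspect succeeds again. The same round trip by hand (commands mirror the log):

    out/minikube-linux-arm64 -p functional-632700 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-632700 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exits 1
    out/minikube-linux-arm64 -p functional-632700 cache reload
    out/minikube-linux-arm64 -p functional-632700 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds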

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 kubectl -- --context functional-632700 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-632700 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (44.29s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-632700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-632700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.28639921s)
functional_test.go:761: restart took 44.286540475s for "functional-632700" cluster.
I0120 18:20:30.110978  304547 config.go:182] Loaded profile config "functional-632700": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/ExtraConfig (44.29s)
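
--extra-config=apiserver.<key>=<value> is passed through to the kube-apiserver static pod as a command-line flag. A sketch of confirming it took effect (the jsonpath and grep are assumptions; the static pod follows the usual <component>-<node> naming):

    kubectl --context functional-632700 -n kube-system get pod \
      kube-apiserver-functional-632700 \
      -o jsonpath='{.spec.containers[0].command}' | tr ',' '\n' | grep admission
    # Expect: --enable-admission-plugins=NamespaceAutoProvision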

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-632700 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.78s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-632700 logs: (1.776018534s)
--- PASS: TestFunctional/serial/LogsCmd (1.78s)

TestFunctional/serial/LogsFileCmd (1.78s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 logs --file /tmp/TestFunctionalserialLogsFileCmd1007488196/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-632700 logs --file /tmp/TestFunctionalserialLogsFileCmd1007488196/001/logs.txt: (1.781681738s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.78s)

TestFunctional/serial/InvalidService (4.25s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-632700 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-632700
E0120 18:20:37.446048  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-632700: exit status 115 (578.081336ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31358 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-632700 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.25s)
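
testdata/invalidsvc.yaml itself is not shown in the log, but the SVC_UNREACHABLE error ("no running pod for service invalid-svc found") implies a Service whose selector matches no pods. A hypothetical minimal reproduction:

    kubectl --context functional-632700 apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: invalid-svc
    spec:
      type: NodePort
      selector:
        app: no-such-pod        # matches nothing, so the service has no endpoints
      ports:
      - port: 80
    EOF
    out/minikube-linux-arm64 service invalid-svc -p functional-632700   # exits 115, as above
    kubectl --context functional-632700 delete svc invalid-svc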

TestFunctional/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-632700 config get cpus: exit status 14 (81.269221ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-632700 config get cpus: exit status 14 (78.258486ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)

TestFunctional/parallel/DashboardCmd (9.52s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-632700 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-632700 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 332050: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.52s)

TestFunctional/parallel/DryRun (0.55s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-632700 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-632700 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (307.712682ms)
-- stdout --
	* [functional-632700] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-299163/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-299163/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0120 18:21:11.393981  331744 out.go:345] Setting OutFile to fd 1 ...
	I0120 18:21:11.394098  331744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 18:21:11.394103  331744 out.go:358] Setting ErrFile to fd 2...
	I0120 18:21:11.394108  331744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 18:21:11.395814  331744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-299163/.minikube/bin
	I0120 18:21:11.397419  331744 out.go:352] Setting JSON to false
	I0120 18:21:11.398703  331744 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":7416,"bootTime":1737389856,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0120 18:21:11.398784  331744 start.go:139] virtualization:  
	I0120 18:21:11.403473  331744 out.go:177] * [functional-632700] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0120 18:21:11.407544  331744 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 18:21:11.407691  331744 notify.go:220] Checking for updates...
	I0120 18:21:11.416013  331744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 18:21:11.420143  331744 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-299163/kubeconfig
	I0120 18:21:11.424216  331744 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-299163/.minikube
	I0120 18:21:11.428256  331744 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0120 18:21:11.437528  331744 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 18:21:11.449618  331744 config.go:182] Loaded profile config "functional-632700": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 18:21:11.450454  331744 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 18:21:11.504626  331744 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 18:21:11.504752  331744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 18:21:11.606379  331744 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-20 18:21:11.596553087 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 18:21:11.606492  331744 docker.go:318] overlay module found
	I0120 18:21:11.609926  331744 out.go:177] * Using the docker driver based on existing profile
	I0120 18:21:11.613013  331744 start.go:297] selected driver: docker
	I0120 18:21:11.613035  331744 start.go:901] validating driver "docker" against &{Name:functional-632700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-632700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 18:21:11.613159  331744 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 18:21:11.616894  331744 out.go:201] 
	W0120 18:21:11.620005  331744 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0120 18:21:11.623045  331744 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-632700 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.55s)

TestFunctional/parallel/InternationalLanguage (0.26s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-632700 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-632700 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (259.580349ms)

-- stdout --
	* [functional-632700] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-299163/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-299163/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0120 18:21:11.132218  331692 out.go:345] Setting OutFile to fd 1 ...
	I0120 18:21:11.132415  331692 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 18:21:11.132425  331692 out.go:358] Setting ErrFile to fd 2...
	I0120 18:21:11.132430  331692 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 18:21:11.132779  331692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-299163/.minikube/bin
	I0120 18:21:11.133185  331692 out.go:352] Setting JSON to false
	I0120 18:21:11.134229  331692 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":7416,"bootTime":1737389856,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0120 18:21:11.134306  331692 start.go:139] virtualization:  
	I0120 18:21:11.138069  331692 out.go:177] * [functional-632700] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	I0120 18:21:11.141218  331692 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 18:21:11.141268  331692 notify.go:220] Checking for updates...
	I0120 18:21:11.146997  331692 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 18:21:11.149880  331692 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-299163/kubeconfig
	I0120 18:21:11.152711  331692 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-299163/.minikube
	I0120 18:21:11.155852  331692 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0120 18:21:11.158918  331692 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 18:21:11.162481  331692 config.go:182] Loaded profile config "functional-632700": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 18:21:11.164763  331692 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 18:21:11.212151  331692 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 18:21:11.212268  331692 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 18:21:11.299595  331692 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-20 18:21:11.287362408 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 18:21:11.299731  331692 docker.go:318] overlay module found
	I0120 18:21:11.303320  331692 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0120 18:21:11.306271  331692 start.go:297] selected driver: docker
	I0120 18:21:11.306297  331692 start.go:901] validating driver "docker" against &{Name:functional-632700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-632700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 18:21:11.306418  331692 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 18:21:11.310012  331692 out.go:201] 
	W0120 18:21:11.313087  331692 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0120 18:21:11.316083  331692 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)
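
The French banner above is the localized form of the English RSRC_INSUFFICIENT_REQ_MEMORY failure from TestFunctional/parallel/DryRun ("Requested memory allocation 250MiB is less than the usable minimum of 1800MB"), which is exactly what this test asserts: the same --dry-run rejection, rendered through minikube's translations. A minimal Go sketch of reproducing the localized run outside the harness follows; the assumption that minikube picks the message language from LC_ALL/LANG is mine, as is the relative binary path taken from this run.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Dry-run start with an intentionally tiny memory request so the command
	// fails fast, asking for French output via the locale variables
	// (assumption: minikube consults LC_ALL/LANG for message language).
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-632700",
		"--dry-run", "--memory", "250MB", "--alsologtostderr",
		"--driver=docker", "--container-runtime=crio")
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
	out, err := cmd.CombinedOutput()
	fmt.Printf("exit: %v\n%s", err, out) // expect exit status 23 and the French banner
}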

TestFunctional/parallel/StatusCmd (1.04s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)
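
The second invocation above exercises the -f flag, which renders a Go text/template against minikube's status struct; {{.Host}}, {{.Kubelet}}, {{.APIServer}} and {{.Kubeconfig}} are the fields the test reads (the "kublet" label in the test's template is literal output text, not a field name). A small sketch of the same call from Go, assuming the binary path used in this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// -f takes a Go template rendered against the status struct.
	tmpl := "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-632700",
		"status", "-f", tmpl).Output()
	if err != nil {
		// status deliberately exits non-zero when a component is stopped
		fmt.Println("non-zero status exit:", err)
	}
	fmt.Printf("%s\n", out)
}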

TestFunctional/parallel/ServiceCmdConnect (11.77s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-632700 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-632700 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-8449669db6-qzrc6" [a3a18cdc-9961-4db4-8052-8ae110bc9314] Pending
helpers_test.go:344: "hello-node-connect-8449669db6-qzrc6" [a3a18cdc-9961-4db4-8052-8ae110bc9314] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-8449669db6-qzrc6" [a3a18cdc-9961-4db4-8052-8ae110bc9314] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.012736301s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31278
functional_test.go:1675: http://192.168.49.2:31278: success! body:

Hostname: hello-node-connect-8449669db6-qzrc6

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31278
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.77s)
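
The sequence here is the standard NodePort round-trip: create a deployment, expose port 8080 as a NodePort service, resolve the node URL with "minikube service --url", and poll it until the echo server answers. A sketch of the final step in Go, with a fixed-interval retry standing in for the harness's retry.go backoff (service name and binary path as in this run):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Resolve the NodePort URL the same way the test does.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-632700",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))
	for i := 0; i < 10; i++ {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("success! body:\n%s\n", body)
			return
		}
		time.Sleep(2 * time.Second) // fixed backoff; the harness uses retry.go
	}
	fmt.Println("endpoint never became reachable")
}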

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (23.97s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5ebde2d8-59ab-4194-8c68-b45602281cf2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0037442s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-632700 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-632700 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-632700 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-632700 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d7d02b51-615c-4ed0-a018-12c52cc258fd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d7d02b51-615c-4ed0-a018-12c52cc258fd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.005001079s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-632700 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-632700 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-632700 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [43e8a1ea-178e-460e-ab47-5fe63c851ba5] Pending
helpers_test.go:344: "sp-pod" [43e8a1ea-178e-460e-ab47-5fe63c851ba5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003778817s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-632700 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.97s)
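
The middle of this test is the actual persistence check: write /tmp/mount/foo through the PVC-backed volume, delete and recreate the pod, then verify the file survived, since the claim rather than the pod owns the data. A compressed sketch of that sequence, shelling out to kubectl the way the harness does (the readiness wait between apply and the final exec is elided):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and fails loudly on any error.
func run(args ...string) string {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	return string(out)
}

func main() {
	ctx := "--context=functional-632700"
	run("kubectl", ctx, "exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	run("kubectl", ctx, "delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("kubectl", ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (wait for the new sp-pod to reach Running before the check below)
	fmt.Print(run("kubectl", ctx, "exec", "sp-pod", "--", "ls", "/tmp/mount"))
}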

TestFunctional/parallel/SSHCmd (0.66s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

TestFunctional/parallel/CpCmd (2.54s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh -n functional-632700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 cp functional-632700:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3951203073/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh -n functional-632700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh -n functional-632700 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.54s)

TestFunctional/parallel/FileSync (0.48s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/304547/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh "sudo cat /etc/test/nested/copy/304547/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.48s)
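
File sync works by mirroring files staged under the minikube home into the node at the same absolute path, which is why the test looks for /etc/test/nested/copy/304547/hosts inside the VM. A sketch of staging such a file from Go; the $MINIKUBE_HOME/files layout and the need for a restart to pick the file up are assumptions on my part:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Assumption: anything under $MINIKUBE_HOME/files is copied into the node
	// at the mirrored absolute path on the next minikube start.
	src := filepath.Join(os.Getenv("MINIKUBE_HOME"), "files",
		"etc", "test", "nested", "copy", "304547", "hosts")
	if err := os.MkdirAll(filepath.Dir(src), 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile(src, []byte("Test file for checking file sync process\n"), 0o644); err != nil {
		panic(err)
	}
	// After a restart, the file should be readable inside the node:
	out, _ := exec.Command("out/minikube-linux-arm64", "-p", "functional-632700",
		"ssh", "sudo cat /etc/test/nested/copy/304547/hosts").CombinedOutput()
	fmt.Printf("%s", out)
}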

TestFunctional/parallel/CertSync (2.17s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/304547.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh "sudo cat /etc/ssl/certs/304547.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/304547.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh "sudo cat /usr/share/ca-certificates/304547.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3045472.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh "sudo cat /etc/ssl/certs/3045472.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/3045472.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh "sudo cat /usr/share/ca-certificates/3045472.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.17s)
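
Each certificate is checked in two locations and under two names: at its literal name under /etc/ssl/certs and /usr/share/ca-certificates, and under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0), the hash.0 form that c_rehash-style trust-store lookups resolve. A sketch comparing the literal and hashed copies over minikube ssh:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// catInVM reads a file inside the minikube node over ssh.
func catInVM(path string) []byte {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-632700",
		"ssh", "sudo cat "+path).Output()
	if err != nil {
		panic(err)
	}
	return out
}

func main() {
	// The same PEM must be visible under its literal name and under the
	// OpenSSL subject-hash name used by trust-store lookups.
	byName := catInVM("/etc/ssl/certs/304547.pem")
	byHash := catInVM("/etc/ssl/certs/51391683.0")
	fmt.Println("identical:", bytes.Equal(byName, byHash))
}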

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-632700 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.86s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-632700 ssh "sudo systemctl is-active docker": exit status 1 (421.925543ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-632700 ssh "sudo systemctl is-active containerd": exit status 1 (442.802309ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.86s)
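
The non-zero exits above are the expected outcome: systemctl is-active exits 0 for an active unit and 3 for an inactive one, and minikube ssh forwards that code, hence "inactive" on stdout plus "Process exited with status 3". That combination is precisely what proves docker and containerd are disabled while crio is the active runtime. A sketch that makes the exit-code mapping explicit:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		// is-active exits 0 for "active", 3 for "inactive"; ssh forwards it.
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-632700",
			"ssh", "sudo systemctl is-active "+unit).Output()
		code := 0
		if ee, ok := err.(*exec.ExitError); ok {
			code = ee.ExitCode()
		}
		fmt.Printf("%s: %s (exit %d)\n", unit, strings.TrimSpace(string(out)), code)
	}
}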

TestFunctional/parallel/License (0.3s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.30s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-632700 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-632700 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-632700 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-632700 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 329433: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-632700 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.52s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-632700 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [7762917b-8b96-48ee-b527-cae758005728] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [7762917b-8b96-48ee-b527-cae758005728] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004801407s
I0120 18:20:48.343998  304547 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.52s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.18s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-632700 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.18s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.176.235 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-632700 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
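
Across this serial block, "minikube tunnel" is what turns the otherwise-pending LoadBalancer service into a routable one: while the tunnel runs, nginx-svc receives an ingress IP (10.98.176.235 above) that the IngressIP step reads back via jsonpath. A sketch polling the same field until the tunnel has done its work:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Poll the jsonpath the test reads; it stays empty until a running
	// "minikube tunnel" assigns the LoadBalancer ingress IP.
	for i := 0; i < 30; i++ {
		out, err := exec.Command("kubectl", "--context", "functional-632700",
			"get", "svc", "nginx-svc",
			"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		if ip := strings.TrimSpace(string(out)); err == nil && ip != "" {
			fmt.Println("tunnel at http://" + ip + " is working!")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("no ingress IP assigned; is the tunnel running?")
}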

TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-632700 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-632700 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64fc58db8c-c9k84" [ff304f06-b569-4fa8-abdd-84414922bf87] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64fc58db8c-c9k84" [ff304f06-b569-4fa8-abdd-84414922bf87] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004808076s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "380.102172ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "65.058325ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "344.29314ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "58.73561ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)
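
profile list -o json emits a machine-readable variant of the table output, which is what the timing assertions here run against. A sketch decoding it in Go; the top-level "valid" key and the "Name" field are assumptions about the JSON shape, not a documented contract:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors the assumed shape of "minikube profile list -o json".
type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Println("profile:", p.Name)
	}
}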

TestFunctional/parallel/MountCmd/any-port (8.9s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-632700 /tmp/TestFunctionalparallelMountCmdany-port4276989523/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737397265821260264" to /tmp/TestFunctionalparallelMountCmdany-port4276989523/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737397265821260264" to /tmp/TestFunctionalparallelMountCmdany-port4276989523/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737397265821260264" to /tmp/TestFunctionalparallelMountCmdany-port4276989523/001/test-1737397265821260264
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-632700 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (335.405908ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0120 18:21:06.157712  304547 retry.go:31] will retry after 425.791285ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 20 18:21 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 20 18:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 20 18:21 test-1737397265821260264
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh cat /mount-9p/test-1737397265821260264
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-632700 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e17a746f-6337-4124-b38f-16e00127f8d5] Pending
helpers_test.go:344: "busybox-mount" [e17a746f-6337-4124-b38f-16e00127f8d5] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e17a746f-6337-4124-b38f-16e00127f8d5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e17a746f-6337-4124-b38f-16e00127f8d5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.006949607s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-632700 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-632700 /tmp/TestFunctionalparallelMountCmdany-port4276989523/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.90s)
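
The initial findmnt failure followed by "will retry after 425.791285ms" is the normal pattern here: the 9p mount is established asynchronously by the backgrounded "minikube mount" daemon, so the harness's retry.go probes with backoff until the mount appears. A sketch of the same probe loop:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// The 9p mount comes up asynchronously; probe with a doubling backoff.
	backoff := 300 * time.Millisecond
	for i := 0; i < 5; i++ {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-632700",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mounted:\n%s", out)
			return
		}
		fmt.Printf("will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
		backoff *= 2
	}
	fmt.Println("mount never appeared")
}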

TestFunctional/parallel/ServiceCmd/List (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.54s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 service list -o json
functional_test.go:1494: Took "504.340912ms" to run "out/minikube-linux-arm64 -p functional-632700 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31767
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31767
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

TestFunctional/parallel/MountCmd/specific-port (2.22s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-632700 /tmp/TestFunctionalparallelMountCmdspecific-port3469176507/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-632700 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (617.653847ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0120 18:21:15.333794  304547 retry.go:31] will retry after 260.248083ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-632700 /tmp/TestFunctionalparallelMountCmdspecific-port3469176507/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-632700 ssh "sudo umount -f /mount-9p": exit status 1 (351.167373ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-632700 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-632700 /tmp/TestFunctionalparallelMountCmdspecific-port3469176507/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.22s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.63s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-632700 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3773879334/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-632700 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3773879334/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-632700 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3773879334/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-632700 ssh "findmnt -T" /mount1: exit status 1 (930.391634ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0120 18:21:17.866939  304547 retry.go:31] will retry after 405.7089ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-632700 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-632700 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3773879334/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-632700 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3773879334/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-632700 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3773879334/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.63s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.29s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-632700 version -o=json --components: (1.288006076s)
--- PASS: TestFunctional/parallel/Version/components (1.29s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-632700 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.0
registry.k8s.io/kube-proxy:v1.32.0
registry.k8s.io/kube-controller-manager:v1.32.0
registry.k8s.io/kube-apiserver:v1.32.0
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-632700
localhost/kicbase/echo-server:functional-632700
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-632700 image ls --format short --alsologtostderr:
I0120 18:21:28.408958  334631 out.go:345] Setting OutFile to fd 1 ...
I0120 18:21:28.409257  334631 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 18:21:28.409293  334631 out.go:358] Setting ErrFile to fd 2...
I0120 18:21:28.409329  334631 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 18:21:28.409622  334631 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-299163/.minikube/bin
I0120 18:21:28.410300  334631 config.go:182] Loaded profile config "functional-632700": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 18:21:28.410456  334631 config.go:182] Loaded profile config "functional-632700": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 18:21:28.410995  334631 cli_runner.go:164] Run: docker container inspect functional-632700 --format={{.State.Status}}
I0120 18:21:28.454183  334631 ssh_runner.go:195] Run: systemctl --version
I0120 18:21:28.454245  334631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-632700
I0120 18:21:28.473671  334631 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/functional-632700/id_rsa Username:docker}
I0120 18:21:28.571006  334631 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)
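
The stderr trace shows where "image ls" gets its data: minikube opens an ssh session into the node and runs "sudo crictl images --output json", then reformats the result. A sketch consuming that JSON directly; the per-image field names (repoTags and friends) match what the ImageListJson stdout below shows, but the wrapping "images" key is an assumption about crictl's envelope rather than a stable API:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList is trimmed to the fields this sketch prints.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// The same command the stderr trace shows minikube running over ssh.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-632700",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		panic(err)
	}
	var il imageList
	if err := json.Unmarshal(out, &il); err != nil {
		panic(err)
	}
	for _, img := range il.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}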

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-632700 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| docker.io/library/nginx                 | alpine             | f9d642c42f7bc | 52.3MB |
| docker.io/library/nginx                 | latest             | 781d902f1e046 | 201MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| localhost/minikube-local-cache-test     | functional-632700  | b8fdb8604a3de | 3.33kB |
| registry.k8s.io/kube-apiserver          | v1.32.0            | 2b5bd0f16085a | 95MB   |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 2be0bcf609c65 | 98.3MB |
| localhost/kicbase/echo-server           | functional-632700  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/kube-controller-manager | v1.32.0            | a8d049396f6b8 | 88.2MB |
| registry.k8s.io/kube-proxy              | v1.32.0            | 2f50386e20bfd | 98.3MB |
| registry.k8s.io/kube-scheduler          | v1.32.0            | c3ff26fb59f37 | 69MB   |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | 7fc9d4aa817aa | 143MB  |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-632700 image ls --format table --alsologtostderr:
I0120 18:21:29.019565  334786 out.go:345] Setting OutFile to fd 1 ...
I0120 18:21:29.023887  334786 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 18:21:29.023948  334786 out.go:358] Setting ErrFile to fd 2...
I0120 18:21:29.023970  334786 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 18:21:29.024268  334786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-299163/.minikube/bin
I0120 18:21:29.024993  334786 config.go:182] Loaded profile config "functional-632700": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 18:21:29.025168  334786 config.go:182] Loaded profile config "functional-632700": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 18:21:29.025943  334786 cli_runner.go:164] Run: docker container inspect functional-632700 --format={{.State.Status}}
I0120 18:21:29.049586  334786 ssh_runner.go:195] Run: systemctl --version
I0120 18:21:29.049647  334786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-632700
I0120 18:21:29.072478  334786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/functional-632700/id_rsa Username:docker}
I0120 18:21:29.163580  334786 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-632700 image ls --format json --alsologtostderr:
[{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"781d902f1e046dcb5aba879a2371b2b6494f97bad89f65a2c7308e78f8087670","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a","docker.io/library/nginx@sha256:5ad6d1fbf7a41cf81658450236559fd03a80f78e6a5ed21b08e373dec4948712"],"repoTags":["docker.io/library/nginx:latest"],"size":"201125287"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"f9d642c42f7bc79efd0a3aa2b7fe913e0324a23c2f27c6b7f3f112473d47131d","repoDigests":["docker.io/library/nginx@sha256:4338a8ba9b9962d07e30e7ff4bbf27d62ee7523deb7205e8f0912169f1bbac10","docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4"],"repoTags":["docker.io/library/nginx:alpine"],"size":"52333544"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82","repoDigests":["registry.k8s.io/etcd@sha256:313adb872febec3a5740969d5bf1b1df0d222f8fa06675f34db5a7fc437356a1","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"143226622"},{"id":"a8d049396f6b8f19df1e3f6b132cb1b9696806ddf19808f97305dd16fce9450c","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:c8faedf1a5f3981ffade770c696b676d30613681a95be3287c1f7eec50e49b6d","registry.k8s.io/kube-controller-manager@sha256:d58a480743a6a86609c7733286a8f900edf784908794f8af62c04e4c128d7049"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.0"],"size":"88241478"},{"id":"2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903","repoDigests":["docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"98274354"},{"id":"b8fdb8604a3de730a3c1e99a91b0e11e8524be05b35a2f03a3867de7c496d84d","repoDigests":["localhost/minikube-local-cache-test@sha256:39eaf83358c0f630ecaa48b5b9b5b9b18f91144bafd4e7ea3da761fbada6bad2"],"repoTags":["localhost/minikube-local-cache-test:functional-632700"],"size":"3330"},{"id":"2b5bd0f16085ac8a7260c30946f3668948a0bb88ac0b9cad635940e3dbef16dc","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9ff42b586c0a57f3fc4a0689afe6db4d8f92f7f79bef3b47b2c75ab112e17de7","registry.k8s.io/kube-apiserver@sha256:ebc0ce2d7e647dd97980ec338ad81496c111741ab4ad05e7c5d37539aaf7dc3b"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.0"],"size":"94991840"},{"id":"2f50386e20bfdb3f3b38672c585959554196426c66cc1905e7e7115c47cc2e67","repoDigests":["registry.k8s.io/kube-proxy@sha256:49a3f84e8bce619ff28cc9158971b0e52c46c250b134f0c480724737dcc28730","registry.k8s.io/kube-proxy@sha256:6aee00d0c7f4869144d1bdbbed7572cd55fd1a4d58fef5a21f53836054cb39b4"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.0"],"size":"98312599"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-632700"],"size":"4788229"},{"id":"c3ff26fb59f37b5910877d6e3de46aa6b020e586bdf2b441ab5f53b6f0a1797d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:07d9fc2e3dac0822adb69893b537c4e37826154868a4b2355c4d3d94a4f1fb60","registry.k8s.io/kube-scheduler@sha256:84c998f7610b356a5eed24f801c01b273cf3e83f081f25c9b16aa8136c2cafb1"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.0"],"size":"68973894"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-632700 image ls --format json --alsologtostderr:
I0120 18:21:28.725272  334699 out.go:345] Setting OutFile to fd 1 ...
I0120 18:21:28.725483  334699 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 18:21:28.725512  334699 out.go:358] Setting ErrFile to fd 2...
I0120 18:21:28.725536  334699 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 18:21:28.725835  334699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-299163/.minikube/bin
I0120 18:21:28.726579  334699 config.go:182] Loaded profile config "functional-632700": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 18:21:28.726765  334699 config.go:182] Loaded profile config "functional-632700": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 18:21:28.727299  334699 cli_runner.go:164] Run: docker container inspect functional-632700 --format={{.State.Status}}
I0120 18:21:28.750139  334699 ssh_runner.go:195] Run: systemctl --version
I0120 18:21:28.750194  334699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-632700
I0120 18:21:28.777381  334699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/functional-632700/id_rsa Username:docker}
I0120 18:21:28.872014  334699 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
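The table, JSON, and YAML variants of `image ls` exercised by these ImageCommands tests are all rendered from the same underlying call visible at the end of each stderr log, `sudo crictl images --output json`. Below is a minimal Go sketch of consuming the JSON shape shown above; the struct and the program itself are illustrative, not minikube's code, and only the keys (`id`, `repoDigests`, `repoTags`, `size`) are taken from the output.

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// image mirrors the keys visible in the `image ls --format json` output
// above; this struct and program are illustrative, not minikube's own code.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	// One entry copied verbatim from the output above.
	data := []byte(`[{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"}]`)

	var images []image
	if err := json.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		// Print the 13-character truncated ID, as in the table output above.
		fmt.Printf("%-30s %s %s\n", strings.Join(img.RepoTags, ","), img.ID[:13], img.Size)
	}
}

Run against the entry above, this prints the same truncated ID (8057e0500773a) and byte size that the table-format test shows for registry.k8s.io/pause:3.1.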

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-632700 image ls --format yaml --alsologtostderr:
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82
repoDigests:
- registry.k8s.io/etcd@sha256:313adb872febec3a5740969d5bf1b1df0d222f8fa06675f34db5a7fc437356a1
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "143226622"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 781d902f1e046dcb5aba879a2371b2b6494f97bad89f65a2c7308e78f8087670
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
- docker.io/library/nginx@sha256:5ad6d1fbf7a41cf81658450236559fd03a80f78e6a5ed21b08e373dec4948712
repoTags:
- docker.io/library/nginx:latest
size: "201125287"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: a8d049396f6b8f19df1e3f6b132cb1b9696806ddf19808f97305dd16fce9450c
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:c8faedf1a5f3981ffade770c696b676d30613681a95be3287c1f7eec50e49b6d
- registry.k8s.io/kube-controller-manager@sha256:d58a480743a6a86609c7733286a8f900edf784908794f8af62c04e4c128d7049
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.0
size: "88241478"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: f9d642c42f7bc79efd0a3aa2b7fe913e0324a23c2f27c6b7f3f112473d47131d
repoDigests:
- docker.io/library/nginx@sha256:4338a8ba9b9962d07e30e7ff4bbf27d62ee7523deb7205e8f0912169f1bbac10
- docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4
repoTags:
- docker.io/library/nginx:alpine
size: "52333544"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-632700
size: "4788229"
- id: c3ff26fb59f37b5910877d6e3de46aa6b020e586bdf2b441ab5f53b6f0a1797d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:07d9fc2e3dac0822adb69893b537c4e37826154868a4b2355c4d3d94a4f1fb60
- registry.k8s.io/kube-scheduler@sha256:84c998f7610b356a5eed24f801c01b273cf3e83f081f25c9b16aa8136c2cafb1
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.0
size: "68973894"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: b8fdb8604a3de730a3c1e99a91b0e11e8524be05b35a2f03a3867de7c496d84d
repoDigests:
- localhost/minikube-local-cache-test@sha256:39eaf83358c0f630ecaa48b5b9b5b9b18f91144bafd4e7ea3da761fbada6bad2
repoTags:
- localhost/minikube-local-cache-test:functional-632700
size: "3330"
- id: 2b5bd0f16085ac8a7260c30946f3668948a0bb88ac0b9cad635940e3dbef16dc
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9ff42b586c0a57f3fc4a0689afe6db4d8f92f7f79bef3b47b2c75ab112e17de7
- registry.k8s.io/kube-apiserver@sha256:ebc0ce2d7e647dd97980ec338ad81496c111741ab4ad05e7c5d37539aaf7dc3b
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.0
size: "94991840"
- id: 2f50386e20bfdb3f3b38672c585959554196426c66cc1905e7e7115c47cc2e67
repoDigests:
- registry.k8s.io/kube-proxy@sha256:49a3f84e8bce619ff28cc9158971b0e52c46c250b134f0c480724737dcc28730
- registry.k8s.io/kube-proxy@sha256:6aee00d0c7f4869144d1bdbbed7572cd55fd1a4d58fef5a21f53836054cb39b4
repoTags:
- registry.k8s.io/kube-proxy:v1.32.0
size: "98312599"
- id: 2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903
repoDigests:
- docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "98274354"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-632700 image ls --format yaml --alsologtostderr:
I0120 18:21:28.388483  334632 out.go:345] Setting OutFile to fd 1 ...
I0120 18:21:28.388635  334632 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 18:21:28.388665  334632 out.go:358] Setting ErrFile to fd 2...
I0120 18:21:28.388672  334632 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 18:21:28.388968  334632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-299163/.minikube/bin
I0120 18:21:28.389652  334632 config.go:182] Loaded profile config "functional-632700": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 18:21:28.389876  334632 config.go:182] Loaded profile config "functional-632700": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 18:21:28.390854  334632 cli_runner.go:164] Run: docker container inspect functional-632700 --format={{.State.Status}}
I0120 18:21:28.418320  334632 ssh_runner.go:195] Run: systemctl --version
I0120 18:21:28.418390  334632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-632700
I0120 18:21:28.438057  334632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/functional-632700/id_rsa Username:docker}
I0120 18:21:28.530720  334632 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-632700 ssh pgrep buildkitd: exit status 1 (353.839245ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 image build -t localhost/my-image:functional-632700 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-632700 image build -t localhost/my-image:functional-632700 testdata/build --alsologtostderr: (3.372136147s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-632700 image build -t localhost/my-image:functional-632700 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> c3f8ffc9a04
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-632700
--> 14e56962181
Successfully tagged localhost/my-image:functional-632700
14e56962181ecb4418b4514d8979e09cd66356c91e894c6e4e3c8b8aa251bfe8
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-632700 image build -t localhost/my-image:functional-632700 testdata/build --alsologtostderr:
I0120 18:21:29.027952  334792 out.go:345] Setting OutFile to fd 1 ...
I0120 18:21:29.028709  334792 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 18:21:29.028729  334792 out.go:358] Setting ErrFile to fd 2...
I0120 18:21:29.028736  334792 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 18:21:29.029006  334792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-299163/.minikube/bin
I0120 18:21:29.029747  334792 config.go:182] Loaded profile config "functional-632700": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 18:21:29.030981  334792 config.go:182] Loaded profile config "functional-632700": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 18:21:29.031625  334792 cli_runner.go:164] Run: docker container inspect functional-632700 --format={{.State.Status}}
I0120 18:21:29.052306  334792 ssh_runner.go:195] Run: systemctl --version
I0120 18:21:29.052368  334792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-632700
I0120 18:21:29.075140  334792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/functional-632700/id_rsa Username:docker}
I0120 18:21:29.170728  334792 build_images.go:161] Building image from path: /tmp/build.2149171006.tar
I0120 18:21:29.170816  334792 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0120 18:21:29.180334  334792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2149171006.tar
I0120 18:21:29.184911  334792 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2149171006.tar: stat -c "%s %y" /var/lib/minikube/build/build.2149171006.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2149171006.tar': No such file or directory
I0120 18:21:29.184941  334792 ssh_runner.go:362] scp /tmp/build.2149171006.tar --> /var/lib/minikube/build/build.2149171006.tar (3072 bytes)
I0120 18:21:29.213938  334792 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2149171006
I0120 18:21:29.227417  334792 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2149171006 -xf /var/lib/minikube/build/build.2149171006.tar
I0120 18:21:29.240952  334792 crio.go:315] Building image: /var/lib/minikube/build/build.2149171006
I0120 18:21:29.241084  334792 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-632700 /var/lib/minikube/build/build.2149171006 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0120 18:21:32.303303  334792 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-632700 /var/lib/minikube/build/build.2149171006 --cgroup-manager=cgroupfs: (3.062163606s)
I0120 18:21:32.303374  334792 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2149171006
I0120 18:21:32.312600  334792 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2149171006.tar
I0120 18:21:32.321819  334792 build_images.go:217] Built localhost/my-image:functional-632700 from /tmp/build.2149171006.tar
I0120 18:21:32.321849  334792 build_images.go:133] succeeded building to: functional-632700
I0120 18:21:32.321854  334792 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)
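The three STEP lines in the ImageBuild stdout pin down the Dockerfile under testdata/build (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /), and the stderr shows the delivery path: the build context is tarred on the host (build_images.go:161), copied to /var/lib/minikube/build, unpacked, and built with `sudo podman build`, whose result CRI-O can see because both read the same containers/storage. Below is a minimal Go sketch of just the tarring step, assuming a flat context directory; packContext is an illustrative helper, not minikube's implementation.

package main

import (
	"archive/tar"
	"io"
	"log"
	"os"
	"path/filepath"
)

// packContext tars every regular file under dir into w, the way a build
// context is shipped to the node before `podman build`. Illustrative only:
// it assumes a flat directory and skips symlinks and ownership handling.
func packContext(dir string, w io.Writer) error {
	tw := tar.NewWriter(w)
	defer tw.Close()
	return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil || !info.Mode().IsRegular() {
			return err
		}
		rel, err := filepath.Rel(dir, path)
		if err != nil {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name = filepath.ToSlash(rel) // store paths relative to the context root
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(tw, f)
		return err
	})
}

func main() {
	out, err := os.Create("/tmp/build.tar")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	if err := packContext("testdata/build", out); err != nil {
		log.Fatal(err)
	}
}

The resulting /tmp/build.tar plays the role of the /tmp/build.2149171006.tar seen in the stderr log above.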

TestFunctional/parallel/ImageCommands/Setup (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
2025/01/20 18:21:21 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-632700
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.89s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 image load --daemon kicbase/echo-server:functional-632700 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-632700 image load --daemon kicbase/echo-server:functional-632700 --alsologtostderr: (1.397450492s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.70s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 image load --daemon kicbase/echo-server:functional-632700 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.10s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-632700
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 image load --daemon kicbase/echo-server:functional-632700 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 image save kicbase/echo-server:functional-632700 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 image rm kicbase/echo-server:functional-632700 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.80s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-632700
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-632700 image save --daemon kicbase/echo-server:functional-632700 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-632700
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-632700
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-632700
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-632700
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (169.3s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-340559 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0120 18:22:53.583022  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:23:21.287365  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-340559 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m48.456251801s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (169.30s)

TestMultiControlPlane/serial/DeployApp (8.83s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-340559 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-340559 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-340559 -- rollout status deployment/busybox: (5.908895776s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-340559 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-340559 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-340559 -- exec busybox-58667487b6-bxtcz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-340559 -- exec busybox-58667487b6-gg2pr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-340559 -- exec busybox-58667487b6-l4hfh -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-340559 -- exec busybox-58667487b6-bxtcz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-340559 -- exec busybox-58667487b6-gg2pr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-340559 -- exec busybox-58667487b6-l4hfh -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-340559 -- exec busybox-58667487b6-bxtcz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-340559 -- exec busybox-58667487b6-gg2pr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-340559 -- exec busybox-58667487b6-l4hfh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.83s)

TestMultiControlPlane/serial/PingHostFromPods (1.74s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-340559 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-340559 -- exec busybox-58667487b6-bxtcz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-340559 -- exec busybox-58667487b6-bxtcz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-340559 -- exec busybox-58667487b6-gg2pr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-340559 -- exec busybox-58667487b6-gg2pr -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-340559 -- exec busybox-58667487b6-l4hfh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-340559 -- exec busybox-58667487b6-l4hfh -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.74s)

TestMultiControlPlane/serial/AddWorkerNode (62.98s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-340559 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-340559 -v=7 --alsologtostderr: (1m2.025130224s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (62.98s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-340559 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.076802626s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

TestMultiControlPlane/serial/CopyFile (19.09s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 status --output json -v=7 --alsologtostderr
E0120 18:25:39.824915  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:25:39.831772  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:25:39.843248  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:25:39.865287  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:25:39.909479  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:25:39.993929  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:25:40.156110  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:25:40.477478  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 cp testdata/cp-test.txt ha-340559:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559 "sudo cat /home/docker/cp-test.txt"
E0120 18:25:41.121985  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 cp ha-340559:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2261246904/001/cp-test_ha-340559.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 cp ha-340559:/home/docker/cp-test.txt ha-340559-m02:/home/docker/cp-test_ha-340559_ha-340559-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559 "sudo cat /home/docker/cp-test.txt"
E0120 18:25:42.410297  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559-m02 "sudo cat /home/docker/cp-test_ha-340559_ha-340559-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 cp ha-340559:/home/docker/cp-test.txt ha-340559-m03:/home/docker/cp-test_ha-340559_ha-340559-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559-m03 "sudo cat /home/docker/cp-test_ha-340559_ha-340559-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 cp ha-340559:/home/docker/cp-test.txt ha-340559-m04:/home/docker/cp-test_ha-340559_ha-340559-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559-m04 "sudo cat /home/docker/cp-test_ha-340559_ha-340559-m04.txt"
E0120 18:25:44.971938  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 cp testdata/cp-test.txt ha-340559-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 cp ha-340559-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2261246904/001/cp-test_ha-340559-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 cp ha-340559-m02:/home/docker/cp-test.txt ha-340559:/home/docker/cp-test_ha-340559-m02_ha-340559.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559 "sudo cat /home/docker/cp-test_ha-340559-m02_ha-340559.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 cp ha-340559-m02:/home/docker/cp-test.txt ha-340559-m03:/home/docker/cp-test_ha-340559-m02_ha-340559-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559-m03 "sudo cat /home/docker/cp-test_ha-340559-m02_ha-340559-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 cp ha-340559-m02:/home/docker/cp-test.txt ha-340559-m04:/home/docker/cp-test_ha-340559-m02_ha-340559-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559-m04 "sudo cat /home/docker/cp-test_ha-340559-m02_ha-340559-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 cp testdata/cp-test.txt ha-340559-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559-m03 "sudo cat /home/docker/cp-test.txt"
E0120 18:25:50.094010  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 cp ha-340559-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2261246904/001/cp-test_ha-340559-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 cp ha-340559-m03:/home/docker/cp-test.txt ha-340559:/home/docker/cp-test_ha-340559-m03_ha-340559.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559 "sudo cat /home/docker/cp-test_ha-340559-m03_ha-340559.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 cp ha-340559-m03:/home/docker/cp-test.txt ha-340559-m02:/home/docker/cp-test_ha-340559-m03_ha-340559-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559-m02 "sudo cat /home/docker/cp-test_ha-340559-m03_ha-340559-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 cp ha-340559-m03:/home/docker/cp-test.txt ha-340559-m04:/home/docker/cp-test_ha-340559-m03_ha-340559-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559-m04 "sudo cat /home/docker/cp-test_ha-340559-m03_ha-340559-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 cp testdata/cp-test.txt ha-340559-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 cp ha-340559-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2261246904/001/cp-test_ha-340559-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 cp ha-340559-m04:/home/docker/cp-test.txt ha-340559:/home/docker/cp-test_ha-340559-m04_ha-340559.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559 "sudo cat /home/docker/cp-test_ha-340559-m04_ha-340559.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 cp ha-340559-m04:/home/docker/cp-test.txt ha-340559-m02:/home/docker/cp-test_ha-340559-m04_ha-340559-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559-m02 "sudo cat /home/docker/cp-test_ha-340559-m04_ha-340559-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 cp ha-340559-m04:/home/docker/cp-test.txt ha-340559-m03:/home/docker/cp-test_ha-340559-m04_ha-340559-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 ssh -n ha-340559-m03 "sudo cat /home/docker/cp-test_ha-340559-m04_ha-340559-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.09s)
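CopyFile above walks the full source-by-destination matrix over the four cluster nodes: the fixture is copied into each node, back out to the host, and across to every other node, with each hop verified by `ssh -n <node> "sudo cat ..."`. A small illustrative Go sketch that generates the same command matrix (paths simplified; the real run stages host-side copies under a per-test /tmp directory):

package main

import "fmt"

func main() {
	// Node names as reported by the ha-340559 cluster above.
	nodes := []string{"ha-340559", "ha-340559-m02", "ha-340559-m03", "ha-340559-m04"}
	for _, src := range nodes {
		// Copy the fixture in, then back out to the host.
		fmt.Printf("minikube -p ha-340559 cp testdata/cp-test.txt %s:/home/docker/cp-test.txt\n", src)
		fmt.Printf("minikube -p ha-340559 cp %s:/home/docker/cp-test.txt /tmp/cp-test_%s.txt\n", src, src)
		// Fan out to every other node, verifying each hop.
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			file := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			fmt.Printf("minikube -p ha-340559 cp %s:/home/docker/cp-test.txt %s:%s\n", src, dst, file)
			fmt.Printf("minikube -p ha-340559 ssh -n %s \"sudo cat %s\"\n", dst, file)
		}
	}
}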

TestMultiControlPlane/serial/StopSecondaryNode (12.86s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 node stop m02 -v=7 --alsologtostderr
E0120 18:26:00.335416  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-340559 node stop m02 -v=7 --alsologtostderr: (12.110661282s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-340559 status -v=7 --alsologtostderr: exit status 7 (746.611134ms)

-- stdout --
	ha-340559
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-340559-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-340559-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-340559-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0120 18:26:10.800282  350591 out.go:345] Setting OutFile to fd 1 ...
	I0120 18:26:10.800517  350591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 18:26:10.800565  350591 out.go:358] Setting ErrFile to fd 2...
	I0120 18:26:10.800586  350591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 18:26:10.800922  350591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-299163/.minikube/bin
	I0120 18:26:10.801163  350591 out.go:352] Setting JSON to false
	I0120 18:26:10.801230  350591 mustload.go:65] Loading cluster: ha-340559
	I0120 18:26:10.801266  350591 notify.go:220] Checking for updates...
	I0120 18:26:10.802255  350591 config.go:182] Loaded profile config "ha-340559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 18:26:10.802314  350591 status.go:174] checking status of ha-340559 ...
	I0120 18:26:10.803376  350591 cli_runner.go:164] Run: docker container inspect ha-340559 --format={{.State.Status}}
	I0120 18:26:10.823623  350591 status.go:371] ha-340559 host status = "Running" (err=<nil>)
	I0120 18:26:10.823646  350591 host.go:66] Checking if "ha-340559" exists ...
	I0120 18:26:10.824032  350591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-340559
	I0120 18:26:10.860115  350591 host.go:66] Checking if "ha-340559" exists ...
	I0120 18:26:10.860429  350591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 18:26:10.860470  350591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-340559
	I0120 18:26:10.879571  350591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/ha-340559/id_rsa Username:docker}
	I0120 18:26:10.971184  350591 ssh_runner.go:195] Run: systemctl --version
	I0120 18:26:10.975723  350591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 18:26:10.987637  350591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 18:26:11.049928  350591 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:48 OomKillDisable:true NGoroutines:73 SystemTime:2025-01-20 18:26:11.040310067 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 18:26:11.050528  350591 kubeconfig.go:125] found "ha-340559" server: "https://192.168.49.254:8443"
	I0120 18:26:11.050568  350591 api_server.go:166] Checking apiserver status ...
	I0120 18:26:11.050624  350591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 18:26:11.061884  350591 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup
	I0120 18:26:11.071443  350591 api_server.go:182] apiserver freezer: "3:freezer:/docker/e6b8420994c58867e98522e9399a784a37894e8c6b1c3522b4c7efbc125c24be/crio/crio-7757df80ab4fd9d34713db168f68fda29811dee3b688cdd93dbedb3b9d9ba9fa"
	I0120 18:26:11.071514  350591 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e6b8420994c58867e98522e9399a784a37894e8c6b1c3522b4c7efbc125c24be/crio/crio-7757df80ab4fd9d34713db168f68fda29811dee3b688cdd93dbedb3b9d9ba9fa/freezer.state
	I0120 18:26:11.081098  350591 api_server.go:204] freezer state: "THAWED"
	I0120 18:26:11.081124  350591 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0120 18:26:11.091105  350591 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0120 18:26:11.091141  350591 status.go:463] ha-340559 apiserver status = Running (err=<nil>)
	I0120 18:26:11.091152  350591 status.go:176] ha-340559 status: &{Name:ha-340559 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 18:26:11.091204  350591 status.go:174] checking status of ha-340559-m02 ...
	I0120 18:26:11.091573  350591 cli_runner.go:164] Run: docker container inspect ha-340559-m02 --format={{.State.Status}}
	I0120 18:26:11.112861  350591 status.go:371] ha-340559-m02 host status = "Stopped" (err=<nil>)
	I0120 18:26:11.112889  350591 status.go:384] host is not running, skipping remaining checks
	I0120 18:26:11.112897  350591 status.go:176] ha-340559-m02 status: &{Name:ha-340559-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 18:26:11.112919  350591 status.go:174] checking status of ha-340559-m03 ...
	I0120 18:26:11.113378  350591 cli_runner.go:164] Run: docker container inspect ha-340559-m03 --format={{.State.Status}}
	I0120 18:26:11.131591  350591 status.go:371] ha-340559-m03 host status = "Running" (err=<nil>)
	I0120 18:26:11.131621  350591 host.go:66] Checking if "ha-340559-m03" exists ...
	I0120 18:26:11.131933  350591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-340559-m03
	I0120 18:26:11.151521  350591 host.go:66] Checking if "ha-340559-m03" exists ...
	I0120 18:26:11.151856  350591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 18:26:11.151907  350591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-340559-m03
	I0120 18:26:11.170218  350591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/ha-340559-m03/id_rsa Username:docker}
	I0120 18:26:11.259276  350591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 18:26:11.271085  350591 kubeconfig.go:125] found "ha-340559" server: "https://192.168.49.254:8443"
	I0120 18:26:11.271114  350591 api_server.go:166] Checking apiserver status ...
	I0120 18:26:11.271156  350591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 18:26:11.281467  350591 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1319/cgroup
	I0120 18:26:11.291820  350591 api_server.go:182] apiserver freezer: "3:freezer:/docker/5c1f0899b8b4f106c777208532a6fb148193067bedc227ccaed0139bd0dadc63/crio/crio-c4b68848a970dc27bb6f92fc113bbefaaeea3a61ffbf7d0fced5ef4d097c5d94"
	I0120 18:26:11.291899  350591 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5c1f0899b8b4f106c777208532a6fb148193067bedc227ccaed0139bd0dadc63/crio/crio-c4b68848a970dc27bb6f92fc113bbefaaeea3a61ffbf7d0fced5ef4d097c5d94/freezer.state
	I0120 18:26:11.300727  350591 api_server.go:204] freezer state: "THAWED"
	I0120 18:26:11.300754  350591 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0120 18:26:11.309309  350591 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0120 18:26:11.309340  350591 status.go:463] ha-340559-m03 apiserver status = Running (err=<nil>)
	I0120 18:26:11.309351  350591 status.go:176] ha-340559-m03 status: &{Name:ha-340559-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 18:26:11.309378  350591 status.go:174] checking status of ha-340559-m04 ...
	I0120 18:26:11.309708  350591 cli_runner.go:164] Run: docker container inspect ha-340559-m04 --format={{.State.Status}}
	I0120 18:26:11.327299  350591 status.go:371] ha-340559-m04 host status = "Running" (err=<nil>)
	I0120 18:26:11.327331  350591 host.go:66] Checking if "ha-340559-m04" exists ...
	I0120 18:26:11.327634  350591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-340559-m04
	I0120 18:26:11.344934  350591 host.go:66] Checking if "ha-340559-m04" exists ...
	I0120 18:26:11.345241  350591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 18:26:11.345288  350591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-340559-m04
	I0120 18:26:11.363546  350591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/ha-340559-m04/id_rsa Username:docker}
	I0120 18:26:11.451744  350591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 18:26:11.465227  350591 status.go:176] ha-340559-m04 status: &{Name:ha-340559-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
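The status probe in the log above runs the same three-step health check against every control-plane node: pgrep locates the newest kube-apiserver process, that process's freezer cgroup is read to confirm the container is THAWED rather than paused, and finally /healthz is fetched on the shared endpoint (https://192.168.49.254:8443). The Go sketch below mirrors that sequence; it is an illustration rather than minikube's actual code, the flat cgroup path is an assumption (the real path is derived from /proc/<pid>/cgroup, as the log shows), and certificate verification is skipped because the apiserver presents a self-signed certificate.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os"
		"os/exec"
		"strings"
	)

	// apiserverHealthy mirrors the three checks in the log: pgrep, freezer
	// state, then an HTTPS GET of /healthz.
	func apiserverHealthy(endpoint string) error {
		// 1. Newest kube-apiserver process, as `sudo pgrep -xnf` finds it.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			return fmt.Errorf("apiserver process not found: %w", err)
		}
		pid := strings.TrimSpace(string(out))

		// 2. The real path comes from /proc/<pid>/cgroup; this flat layout is
		// an assumption for the sketch. "THAWED" means the container isn't paused.
		state, err := os.ReadFile("/sys/fs/cgroup/freezer/" + pid + "/freezer.state")
		if err == nil && strings.TrimSpace(string(state)) != "THAWED" {
			return fmt.Errorf("apiserver is frozen: %q", strings.TrimSpace(string(state)))
		}

		// 3. The apiserver serves a self-signed certificate, so skip verification.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d", resp.StatusCode)
		}
		return nil
	}

	func main() {
		fmt.Println(apiserverHealthy("https://192.168.49.254:8443"))
	}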
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.86s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.80s)

TestMultiControlPlane/serial/RestartSecondaryNode (30.62s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 node start m02 -v=7 --alsologtostderr
E0120 18:26:20.817735  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-340559 node start m02 -v=7 --alsologtostderr: (29.096050521s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-340559 status -v=7 --alsologtostderr: (1.3929089s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (30.62s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.21s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.207141981s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.21s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (208.36s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-340559 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-340559 -v=7 --alsologtostderr
E0120 18:27:01.779129  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-340559 -v=7 --alsologtostderr: (37.13524738s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-340559 --wait=true -v=7 --alsologtostderr
E0120 18:27:53.582451  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:28:23.700642  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-340559 --wait=true -v=7 --alsologtostderr: (2m51.032788836s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-340559
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (208.36s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.54s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-340559 node delete m03 -v=7 --alsologtostderr: (11.630639218s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
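The go-template in that last command is how the test asserts every remaining node reports Ready without parsing the full node JSON: it ranges over .items, then over each node's .status.conditions, and prints the status of the condition whose type is "Ready". Below is a self-contained Go sketch evaluating the exact same template against a canned two-node list; the JSON is mock data, not output from this run.

	package main

	import (
		"encoding/json"
		"os"
		"text/template"
	)

	// Mock node list; the real data comes from the apiserver.
	const nodesJSON = `{"items":[
	 {"status":{"conditions":[{"type":"MemoryPressure","status":"False"},
	                          {"type":"Ready","status":"True"}]}},
	 {"status":{"conditions":[{"type":"Ready","status":"True"}]}}
	]}`

	func main() {
		var nodes map[string]interface{}
		if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
			panic(err)
		}
		// The template the test hands to kubectl: one " True" line per
		// node whose Ready condition is True.
		ready := template.Must(template.New("ready").Parse(
			`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
		if err := ready.Execute(os.Stdout, nodes); err != nil {
			panic(err)
		}
	}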
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.54s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

TestMultiControlPlane/serial/StopCluster (35.81s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 stop -v=7 --alsologtostderr
E0120 18:30:39.825205  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-340559 stop -v=7 --alsologtostderr: (35.668455788s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-340559 status -v=7 --alsologtostderr: exit status 7 (140.959137ms)

-- stdout --
	ha-340559
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-340559-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-340559-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0120 18:31:01.519582  365032 out.go:345] Setting OutFile to fd 1 ...
	I0120 18:31:01.519773  365032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 18:31:01.519804  365032 out.go:358] Setting ErrFile to fd 2...
	I0120 18:31:01.519827  365032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 18:31:01.520101  365032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-299163/.minikube/bin
	I0120 18:31:01.520340  365032 out.go:352] Setting JSON to false
	I0120 18:31:01.520418  365032 mustload.go:65] Loading cluster: ha-340559
	I0120 18:31:01.520502  365032 notify.go:220] Checking for updates...
	I0120 18:31:01.522428  365032 config.go:182] Loaded profile config "ha-340559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 18:31:01.522608  365032 status.go:174] checking status of ha-340559 ...
	I0120 18:31:01.524347  365032 cli_runner.go:164] Run: docker container inspect ha-340559 --format={{.State.Status}}
	I0120 18:31:01.546509  365032 status.go:371] ha-340559 host status = "Stopped" (err=<nil>)
	I0120 18:31:01.546534  365032 status.go:384] host is not running, skipping remaining checks
	I0120 18:31:01.546541  365032 status.go:176] ha-340559 status: &{Name:ha-340559 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 18:31:01.546567  365032 status.go:174] checking status of ha-340559-m02 ...
	I0120 18:31:01.546899  365032 cli_runner.go:164] Run: docker container inspect ha-340559-m02 --format={{.State.Status}}
	I0120 18:31:01.579736  365032 status.go:371] ha-340559-m02 host status = "Stopped" (err=<nil>)
	I0120 18:31:01.579762  365032 status.go:384] host is not running, skipping remaining checks
	I0120 18:31:01.579770  365032 status.go:176] ha-340559-m02 status: &{Name:ha-340559-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 18:31:01.579791  365032 status.go:174] checking status of ha-340559-m04 ...
	I0120 18:31:01.580135  365032 cli_runner.go:164] Run: docker container inspect ha-340559-m04 --format={{.State.Status}}
	I0120 18:31:01.600256  365032 status.go:371] ha-340559-m04 host status = "Stopped" (err=<nil>)
	I0120 18:31:01.600292  365032 status.go:384] host is not running, skipping remaining checks
	I0120 18:31:01.600301  365032 status.go:176] ha-340559-m04 status: &{Name:ha-340559-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
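Note that `status` deliberately exits non-zero (exit status 7 here) whenever any profile node is stopped, so the Non-zero exit above is the expected outcome of a full cluster stop and the test still passes. Both the key/value stdout block and the &{Name:... Host:...} dumps in stderr reflect one record per node; the sketch below models that record with field names taken straight from the log, while the types and rendering logic are assumptions, not minikube's own code.

	package main

	import "fmt"

	// Status mirrors the per-node record printed by `minikube status`.
	type Status struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func render(s Status) {
		fmt.Println(s.Name)
		if s.Worker {
			fmt.Println("type: Worker")
		} else {
			fmt.Println("type: Control Plane")
		}
		fmt.Printf("host: %s\nkubelet: %s\n", s.Host, s.Kubelet)
		// Only control-plane nodes report apiserver and kubeconfig state;
		// for workers the log marks both "Irrelevant" and omits them.
		if !s.Worker {
			fmt.Printf("apiserver: %s\nkubeconfig: %s\n", s.APIServer, s.Kubeconfig)
		}
		fmt.Println()
	}

	func main() {
		render(Status{Name: "ha-340559", Host: "Stopped", Kubelet: "Stopped",
			APIServer: "Stopped", Kubeconfig: "Stopped"})
		render(Status{Name: "ha-340559-m04", Host: "Stopped", Kubelet: "Stopped", Worker: true})
	}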
--- PASS: TestMultiControlPlane/serial/StopCluster (35.81s)

TestMultiControlPlane/serial/RestartCluster (104.51s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-340559 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0120 18:31:07.542236  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-340559 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m43.555716582s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (104.51s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)

TestMultiControlPlane/serial/AddSecondaryNode (70.36s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-340559 --control-plane -v=7 --alsologtostderr
E0120 18:32:53.582233  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-340559 --control-plane -v=7 --alsologtostderr: (1m9.380090905s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-340559 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (70.36s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.97s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.97s)

TestJSONOutput/start/Command (76.84s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-357085 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0120 18:34:16.649615  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-357085 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m16.840002211s)
--- PASS: TestJSONOutput/start/Command (76.84s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.76s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-357085 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.67s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-357085 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.89s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-357085 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-357085 --output=json --user=testUser: (5.887725173s)
--- PASS: TestJSONOutput/stop/Command (5.89s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-086019 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-086019 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.169148ms)

-- stdout --
	{"specversion":"1.0","id":"9ded7fe3-e522-4d44-b270-7c054a00174c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-086019] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"65167916-68d3-4754-bbf2-59db22f825d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20109"}}
	{"specversion":"1.0","id":"ae6e7a1c-ad1c-4d8a-9ccb-b15d181700cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"03333398-c230-4040-ab5d-473792027670","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20109-299163/kubeconfig"}}
	{"specversion":"1.0","id":"41314f09-4876-4c04-b47a-018d6fcd0014","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-299163/.minikube"}}
	{"specversion":"1.0","id":"a758ab99-58c7-47e3-8012-123363e4c37d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"732e6ddd-d569-4142-b46f-789dd138590a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cdadcbc5-c072-4c6f-b26b-a8cb72714288","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
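Each stdout line above is a single CloudEvents-style JSON object, which is what --output=json promises: a specversion/id/source/type envelope with a string-keyed data payload (step events additionally carry currentstep and totalsteps, and the final error event carries exitcode and name). A minimal Go consumer for one such line follows; the struct is an illustration of this envelope, not minikube's own type.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// event models the CloudEvents envelope seen in the stdout block above.
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// The final error event from the stdout block, verbatim.
		line := `{"specversion":"1.0","id":"cdadcbc5-c072-4c6f-b26b-a8cb72714288","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`
		var e event
		if err := json.Unmarshal([]byte(line), &e); err != nil {
			panic(err)
		}
		fmt.Println(e.Type, e.Data["name"], e.Data["message"])
		// io.k8s.sigs.minikube.error DRV_UNSUPPORTED_OS The driver 'fail' is not supported on linux/arm64
	}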
helpers_test.go:175: Cleaning up "json-output-error-086019" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-086019
--- PASS: TestErrorJSONOutput (0.24s)

TestKicCustomNetwork/create_custom_network (42.11s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-743546 --network=
E0120 18:35:39.824621  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-743546 --network=: (39.956134611s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-743546" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-743546
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-743546: (2.134181201s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.11s)

TestKicCustomNetwork/use_default_bridge_network (32.84s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-714867 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-714867 --network=bridge: (30.847845667s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-714867" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-714867
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-714867: (1.971531667s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.84s)

TestKicExistingNetwork (31.42s)

=== RUN   TestKicExistingNetwork
I0120 18:36:50.735058  304547 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0120 18:36:50.751289  304547 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0120 18:36:50.751368  304547 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0120 18:36:50.751387  304547 cli_runner.go:164] Run: docker network inspect existing-network
W0120 18:36:50.767665  304547 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0120 18:36:50.767696  304547 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0120 18:36:50.767713  304547 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0120 18:36:50.767814  304547 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0120 18:36:50.784028  304547 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e224032efd4c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:fb:9b:bb:4c} reservation:<nil>}
I0120 18:36:50.784391  304547 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001cdb780}
I0120 18:36:50.784414  304547 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0120 18:36:50.784462  304547 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0120 18:36:50.852586  304547 network_create.go:108] docker network existing-network 192.168.58.0/24 created
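The subnet walk visible just above is worth noting: minikube probed 192.168.49.0/24, found it taken by the default cluster bridge, and settled on 192.168.58.0/24. Below is a sketch of that first-free-subnet scan; the 9-wide step (49, 58, 67, ...) is inferred from the addresses appearing in this log, so treat both the candidate list and the collision test as assumptions rather than minikube's exact policy.

	package main

	import (
		"fmt"
		"net"
	)

	// firstFreeSubnet returns the first candidate /24 that does not
	// collide with any already-allocated network.
	func firstFreeSubnet(taken []*net.IPNet) string {
		for third := 49; third <= 254; third += 9 { // 49, 58, 67, ... as seen in the logs
			candidate := fmt.Sprintf("192.168.%d.0/24", third)
			_, cand, _ := net.ParseCIDR(candidate)
			collides := false
			for _, t := range taken {
				if t.Contains(cand.IP) || cand.Contains(t.IP) {
					collides = true
					break
				}
			}
			if !collides {
				return candidate
			}
		}
		return ""
	}

	func main() {
		_, used, _ := net.ParseCIDR("192.168.49.0/24") // the default cluster bridge
		fmt.Println(firstFreeSubnet([]*net.IPNet{used})) // -> 192.168.58.0/24
	}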
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-609296 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-609296 --network=existing-network: (29.235023533s)
helpers_test.go:175: Cleaning up "existing-network-609296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-609296
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-609296: (2.040579138s)
I0120 18:37:22.144284  304547 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (31.42s)

TestKicCustomSubnet (32.38s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-827653 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-827653 --subnet=192.168.60.0/24: (30.211580023s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-827653 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-827653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-827653
E0120 18:37:53.582320  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-827653: (2.149889027s)
--- PASS: TestKicCustomSubnet (32.38s)

TestKicStaticIP (32.04s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-416860 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-416860 --static-ip=192.168.200.200: (29.819120667s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-416860 ip
helpers_test.go:175: Cleaning up "static-ip-416860" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-416860
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-416860: (2.074180544s)
--- PASS: TestKicStaticIP (32.04s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (74.02s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-868768 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-868768 --driver=docker  --container-runtime=crio: (34.572617975s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-871476 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-871476 --driver=docker  --container-runtime=crio: (33.983768284s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-868768
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-871476
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-871476" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-871476
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-871476: (2.094827022s)
helpers_test.go:175: Cleaning up "first-868768" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-868768
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-868768: (1.956477142s)
--- PASS: TestMinikubeProfile (74.02s)

TestMountStart/serial/StartWithMountFirst (6.4s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-895389 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-895389 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.396330218s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.40s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-895389 ssh -- ls /minikube-host
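The verification step is deliberately simple: if the 9p mount negotiated by --mount is alive, listing /minikube-host inside the guest succeeds, and a dead mount surfaces as a non-zero ssh exit. A sketch of the same check driven from Go, with the binary path and profile name copied from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Equivalent to: out/minikube-linux-arm64 -p mount-start-1-895389 ssh -- ls /minikube-host
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "mount-start-1-895389",
			"ssh", "--", "ls", "/minikube-host")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("mount not visible: %v\n%s", err, out)
		} else {
			fmt.Printf("mount contents:\n%s", out)
		}
	}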
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (6.82s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-897342 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-897342 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.822235327s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.82s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-897342 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.63s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-895389 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-895389 --alsologtostderr -v=5: (1.625933713s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-897342 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-897342
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-897342: (1.200217285s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (8.37s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-897342
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-897342: (7.364868827s)
--- PASS: TestMountStart/serial/RestartStopped (8.37s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-897342 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (74.16s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-029553 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0120 18:40:39.824859  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-029553 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m13.664638247s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (74.16s)

TestMultiNode/serial/DeployApp2Nodes (6.69s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029553 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029553 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-029553 -- rollout status deployment/busybox: (4.811633168s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029553 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029553 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029553 -- exec busybox-58667487b6-4lsd9 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029553 -- exec busybox-58667487b6-pk7x4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029553 -- exec busybox-58667487b6-4lsd9 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029553 -- exec busybox-58667487b6-pk7x4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029553 -- exec busybox-58667487b6-4lsd9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029553 -- exec busybox-58667487b6-pk7x4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.69s)

TestMultiNode/serial/PingHostFrom2Pods (1.01s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029553 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029553 -- exec busybox-58667487b6-4lsd9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029553 -- exec busybox-58667487b6-4lsd9 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029553 -- exec busybox-58667487b6-pk7x4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029553 -- exec busybox-58667487b6-pk7x4 -- sh -c "ping -c 1 192.168.67.1"
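The shell pipeline in those exec commands recovers the host's gateway address from inside a pod: busybox nslookup resolves host.minikube.internal, awk 'NR==5' picks the fifth line of the output, and cut -d' ' -f3 takes a space-separated token from it, which the following command then pings (192.168.67.1, the cluster bridge gateway). A sketch of that extraction in Go; the mock resolver output below assumes busybox's format, which varies between resolver builds, and the last-field lookup is a forgiving stand-in for the awk/cut pair.

	package main

	import (
		"fmt"
		"strings"
	)

	// hostIP mimics `nslookup ... | awk 'NR==5' | cut -d' ' -f3`: take line 5
	// of the resolver output and pull the address out of it. strings.Fields
	// splits on runs of whitespace (more forgiving than cut's single-space
	// rule) and the final field is the IP itself.
	func hostIP(nslookupOut string) string {
		lines := strings.Split(nslookupOut, "\n")
		if len(lines) < 5 {
			return ""
		}
		fields := strings.Fields(lines[4]) // awk NR==5
		if len(fields) == 0 {
			return ""
		}
		return fields[len(fields)-1]
	}

	func main() {
		// Mock busybox-style output; real formatting differs across builds.
		out := "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10:53\n\nName:\thost.minikube.internal\nAddress: 192.168.67.1\n"
		fmt.Println(hostIP(out)) // 192.168.67.1
	}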
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.01s)

TestMultiNode/serial/AddNode (27.59s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-029553 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-029553 -v 3 --alsologtostderr: (26.931250496s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.59s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-029553 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.65s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

TestMultiNode/serial/CopyFile (10.37s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 cp testdata/cp-test.txt multinode-029553:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 ssh -n multinode-029553 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 cp multinode-029553:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile135543315/001/cp-test_multinode-029553.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 ssh -n multinode-029553 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 cp multinode-029553:/home/docker/cp-test.txt multinode-029553-m02:/home/docker/cp-test_multinode-029553_multinode-029553-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 ssh -n multinode-029553 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 ssh -n multinode-029553-m02 "sudo cat /home/docker/cp-test_multinode-029553_multinode-029553-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 cp multinode-029553:/home/docker/cp-test.txt multinode-029553-m03:/home/docker/cp-test_multinode-029553_multinode-029553-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 ssh -n multinode-029553 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 ssh -n multinode-029553-m03 "sudo cat /home/docker/cp-test_multinode-029553_multinode-029553-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 cp testdata/cp-test.txt multinode-029553-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 ssh -n multinode-029553-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 cp multinode-029553-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile135543315/001/cp-test_multinode-029553-m02.txt
E0120 18:42:02.904526  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 ssh -n multinode-029553-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 cp multinode-029553-m02:/home/docker/cp-test.txt multinode-029553:/home/docker/cp-test_multinode-029553-m02_multinode-029553.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 ssh -n multinode-029553-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 ssh -n multinode-029553 "sudo cat /home/docker/cp-test_multinode-029553-m02_multinode-029553.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 cp multinode-029553-m02:/home/docker/cp-test.txt multinode-029553-m03:/home/docker/cp-test_multinode-029553-m02_multinode-029553-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 ssh -n multinode-029553-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 ssh -n multinode-029553-m03 "sudo cat /home/docker/cp-test_multinode-029553-m02_multinode-029553-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 cp testdata/cp-test.txt multinode-029553-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 ssh -n multinode-029553-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 cp multinode-029553-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile135543315/001/cp-test_multinode-029553-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 ssh -n multinode-029553-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 cp multinode-029553-m03:/home/docker/cp-test.txt multinode-029553:/home/docker/cp-test_multinode-029553-m03_multinode-029553.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 ssh -n multinode-029553-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 ssh -n multinode-029553 "sudo cat /home/docker/cp-test_multinode-029553-m03_multinode-029553.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 cp multinode-029553-m03:/home/docker/cp-test.txt multinode-029553-m02:/home/docker/cp-test_multinode-029553-m03_multinode-029553-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 ssh -n multinode-029553-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 ssh -n multinode-029553-m02 "sudo cat /home/docker/cp-test_multinode-029553-m03_multinode-029553-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.37s)
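
The copy matrix above boils down to three `minikube cp` shapes: host-to-node, node-to-host, and node-to-node. A minimal sketch, with commands lifted from the log (`minikube` stands in for the `out/minikube-linux-arm64` binary under test; the host-side destination path is illustrative):

    # host -> node, then read it back over ssh to verify
    minikube -p multinode-029553 cp testdata/cp-test.txt multinode-029553:/home/docker/cp-test.txt
    minikube -p multinode-029553 ssh -n multinode-029553 "sudo cat /home/docker/cp-test.txt"
    # node -> host
    minikube -p multinode-029553 cp multinode-029553:/home/docker/cp-test.txt /tmp/cp-test.txt
    # node -> node (m02's copy lands on m03)
    minikube -p multinode-029553 cp multinode-029553-m02:/home/docker/cp-test.txt multinode-029553-m03:/home/docker/cp-test.txt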

                                                
                                    
TestMultiNode/serial/StopNode (2.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-029553 node stop m03: (1.207392241s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-029553 status: exit status 7 (532.820117ms)

                                                
                                                
-- stdout --
	multinode-029553
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-029553-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-029553-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-029553 status --alsologtostderr: exit status 7 (514.555454ms)

                                                
                                                
-- stdout --
	multinode-029553
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-029553-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-029553-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 18:42:10.250193  418581 out.go:345] Setting OutFile to fd 1 ...
	I0120 18:42:10.250345  418581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 18:42:10.250373  418581 out.go:358] Setting ErrFile to fd 2...
	I0120 18:42:10.250384  418581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 18:42:10.250760  418581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-299163/.minikube/bin
	I0120 18:42:10.251080  418581 out.go:352] Setting JSON to false
	I0120 18:42:10.251149  418581 mustload.go:65] Loading cluster: multinode-029553
	I0120 18:42:10.251911  418581 config.go:182] Loaded profile config "multinode-029553": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 18:42:10.251938  418581 status.go:174] checking status of multinode-029553 ...
	I0120 18:42:10.252793  418581 cli_runner.go:164] Run: docker container inspect multinode-029553 --format={{.State.Status}}
	I0120 18:42:10.253387  418581 notify.go:220] Checking for updates...
	I0120 18:42:10.281943  418581 status.go:371] multinode-029553 host status = "Running" (err=<nil>)
	I0120 18:42:10.281979  418581 host.go:66] Checking if "multinode-029553" exists ...
	I0120 18:42:10.282333  418581 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-029553
	I0120 18:42:10.315413  418581 host.go:66] Checking if "multinode-029553" exists ...
	I0120 18:42:10.315733  418581 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 18:42:10.315784  418581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-029553
	I0120 18:42:10.334367  418581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33274 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/multinode-029553/id_rsa Username:docker}
	I0120 18:42:10.424115  418581 ssh_runner.go:195] Run: systemctl --version
	I0120 18:42:10.429046  418581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 18:42:10.440933  418581 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 18:42:10.498252  418581 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:63 SystemTime:2025-01-20 18:42:10.488508227 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 18:42:10.498865  418581 kubeconfig.go:125] found "multinode-029553" server: "https://192.168.67.2:8443"
	I0120 18:42:10.498905  418581 api_server.go:166] Checking apiserver status ...
	I0120 18:42:10.498966  418581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 18:42:10.509851  418581 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1419/cgroup
	I0120 18:42:10.519581  418581 api_server.go:182] apiserver freezer: "3:freezer:/docker/48511deff003b489e28d5d398d411731867955389a7040c764761e2ef8848763/crio/crio-b8f9bd3235f8c47a95f49cb7f7263ad39659ba97385c0a7383b75c129a2c33b9"
	I0120 18:42:10.519655  418581 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/48511deff003b489e28d5d398d411731867955389a7040c764761e2ef8848763/crio/crio-b8f9bd3235f8c47a95f49cb7f7263ad39659ba97385c0a7383b75c129a2c33b9/freezer.state
	I0120 18:42:10.528590  418581 api_server.go:204] freezer state: "THAWED"
	I0120 18:42:10.528619  418581 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0120 18:42:10.537208  418581 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0120 18:42:10.537244  418581 status.go:463] multinode-029553 apiserver status = Running (err=<nil>)
	I0120 18:42:10.537257  418581 status.go:176] multinode-029553 status: &{Name:multinode-029553 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 18:42:10.537275  418581 status.go:174] checking status of multinode-029553-m02 ...
	I0120 18:42:10.537590  418581 cli_runner.go:164] Run: docker container inspect multinode-029553-m02 --format={{.State.Status}}
	I0120 18:42:10.554185  418581 status.go:371] multinode-029553-m02 host status = "Running" (err=<nil>)
	I0120 18:42:10.554210  418581 host.go:66] Checking if "multinode-029553-m02" exists ...
	I0120 18:42:10.554518  418581 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-029553-m02
	I0120 18:42:10.571287  418581 host.go:66] Checking if "multinode-029553-m02" exists ...
	I0120 18:42:10.571611  418581 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 18:42:10.571659  418581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-029553-m02
	I0120 18:42:10.589010  418581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33279 SSHKeyPath:/home/jenkins/minikube-integration/20109-299163/.minikube/machines/multinode-029553-m02/id_rsa Username:docker}
	I0120 18:42:10.675449  418581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 18:42:10.687748  418581 status.go:176] multinode-029553-m02 status: &{Name:multinode-029553-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0120 18:42:10.687785  418581 status.go:174] checking status of multinode-029553-m03 ...
	I0120 18:42:10.688093  418581 cli_runner.go:164] Run: docker container inspect multinode-029553-m03 --format={{.State.Status}}
	I0120 18:42:10.705238  418581 status.go:371] multinode-029553-m03 host status = "Stopped" (err=<nil>)
	I0120 18:42:10.705261  418581 status.go:384] host is not running, skipping remaining checks
	I0120 18:42:10.705268  418581 status.go:176] multinode-029553-m03 status: &{Name:multinode-029553-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
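
Note the exit-code handling: with m03's host stopped, `status` exits 7 rather than 0, and the test accepts that as the expected signal. A sketch of the same check by hand (the meaning of 7 is inferred from this run; compare the "exit status 7 (may be ok)" notes elsewhere in this report):

    minikube -p multinode-029553 node stop m03
    minikube -p multinode-029553 status
    echo $?   # 7 in this run: at least one node reports Stopped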

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-029553 node start m03 -v=7 --alsologtostderr: (9.153916883s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.92s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (87.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-029553
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-029553
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-029553: (24.810565181s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-029553 --wait=true -v=8 --alsologtostderr
E0120 18:42:53.582389  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-029553 --wait=true -v=8 --alsologtostderr: (1m2.73086126s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-029553
--- PASS: TestMultiNode/serial/RestartKeepsNodes (87.69s)
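
What this test pins down is that a full stop/start cycle preserves the node set. A minimal sketch of the same sequence (profile name from this run):

    minikube node list -p multinode-029553           # snapshot: three nodes
    minikube stop -p multinode-029553
    minikube start -p multinode-029553 --wait=true   # restart; --wait blocks until components are healthy
    minikube node list -p multinode-029553           # the same three nodes should reappear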

                                                
                                    
TestMultiNode/serial/DeleteNode (5.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-029553 node delete m03: (4.630253419s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.30s)
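
The quoted go-template prints one Ready-condition status per remaining node. An equivalent jsonpath form, shown here only as a more readable alternative (not what the test runs), would be:

    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'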

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-029553 stop: (23.601315802s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-029553 status: exit status 7 (95.683422ms)

                                                
                                                
-- stdout --
	multinode-029553
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-029553-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-029553 status --alsologtostderr: exit status 7 (99.009876ms)

                                                
                                                
-- stdout --
	multinode-029553
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-029553-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 18:44:17.373345  426054 out.go:345] Setting OutFile to fd 1 ...
	I0120 18:44:17.373734  426054 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 18:44:17.373768  426054 out.go:358] Setting ErrFile to fd 2...
	I0120 18:44:17.373821  426054 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 18:44:17.374114  426054 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-299163/.minikube/bin
	I0120 18:44:17.374357  426054 out.go:352] Setting JSON to false
	I0120 18:44:17.374421  426054 mustload.go:65] Loading cluster: multinode-029553
	I0120 18:44:17.374888  426054 config.go:182] Loaded profile config "multinode-029553": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 18:44:17.374933  426054 status.go:174] checking status of multinode-029553 ...
	I0120 18:44:17.375579  426054 cli_runner.go:164] Run: docker container inspect multinode-029553 --format={{.State.Status}}
	I0120 18:44:17.376164  426054 notify.go:220] Checking for updates...
	I0120 18:44:17.394598  426054 status.go:371] multinode-029553 host status = "Stopped" (err=<nil>)
	I0120 18:44:17.394620  426054 status.go:384] host is not running, skipping remaining checks
	I0120 18:44:17.394626  426054 status.go:176] multinode-029553 status: &{Name:multinode-029553 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 18:44:17.394658  426054 status.go:174] checking status of multinode-029553-m02 ...
	I0120 18:44:17.394965  426054 cli_runner.go:164] Run: docker container inspect multinode-029553-m02 --format={{.State.Status}}
	I0120 18:44:17.415341  426054 status.go:371] multinode-029553-m02 host status = "Stopped" (err=<nil>)
	I0120 18:44:17.415363  426054 status.go:384] host is not running, skipping remaining checks
	I0120 18:44:17.415371  426054 status.go:176] multinode-029553-m02 status: &{Name:multinode-029553-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.80s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (56.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-029553 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-029553 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (55.739688393s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029553 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.41s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (36.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-029553
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-029553-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-029553-m02 --driver=docker  --container-runtime=crio: exit status 14 (99.273752ms)

                                                
                                                
-- stdout --
	* [multinode-029553-m02] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-299163/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-299163/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-029553-m02' is duplicated with machine name 'multinode-029553-m02' in profile 'multinode-029553'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-029553-m03 --driver=docker  --container-runtime=crio
E0120 18:45:39.828831  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-029553-m03 --driver=docker  --container-runtime=crio: (33.960599809s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-029553
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-029553: exit status 80 (345.113254ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-029553 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-029553-m03 already exists in multinode-029553-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-029553-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-029553-m03: (2.003346577s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.47s)
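
The three outcomes above map onto distinct exit codes. A sketch of the conflict matrix (names and codes taken from this run):

    minikube start -p multinode-029553-m02 --driver=docker --container-runtime=crio   # exit 14 (MK_USAGE): clashes with a machine name inside profile multinode-029553
    minikube start -p multinode-029553-m03 --driver=docker --container-runtime=crio   # allowed: becomes a standalone single-node profile
    minikube node add -p multinode-029553                                             # exit 80 (GUEST_NODE_ADD): the next node name, m03, now collides with that profile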

                                                
                                    
TestPreload (127.57s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-076135 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-076135 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m34.95232182s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-076135 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-076135 image pull gcr.io/k8s-minikube/busybox: (3.394465568s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-076135
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-076135: (5.829683408s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-076135 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0120 18:47:53.583020  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-076135 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (20.559574132s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-076135 image list
helpers_test.go:175: Cleaning up "test-preload-076135" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-076135
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-076135: (2.45689736s)
--- PASS: TestPreload (127.57s)
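
The flow being verified: an image pulled into a non-preloaded v1.24.4 cluster must survive a stop and a restart that goes through the preload path. A sketch of the sequence (flags from the log; the final line is the test's own `image list` assertion):

    minikube start -p test-preload-076135 --preload=false --kubernetes-version=v1.24.4 --memory=2200 --driver=docker --container-runtime=crio
    minikube -p test-preload-076135 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-076135
    minikube start -p test-preload-076135 --memory=2200 --driver=docker --container-runtime=crio
    minikube -p test-preload-076135 image list    # busybox must still be listed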

                                                
                                    
TestScheduledStopUnix (106.27s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-443472 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-443472 --memory=2048 --driver=docker  --container-runtime=crio: (29.391879061s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-443472 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-443472 -n scheduled-stop-443472
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-443472 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0120 18:48:31.892762  304547 retry.go:31] will retry after 60.085µs: open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/scheduled-stop-443472/pid: no such file or directory
I0120 18:48:31.893204  304547 retry.go:31] will retry after 76.706µs: open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/scheduled-stop-443472/pid: no such file or directory
I0120 18:48:31.894339  304547 retry.go:31] will retry after 251.764µs: open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/scheduled-stop-443472/pid: no such file or directory
I0120 18:48:31.895469  304547 retry.go:31] will retry after 187.228µs: open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/scheduled-stop-443472/pid: no such file or directory
I0120 18:48:31.896587  304547 retry.go:31] will retry after 379.08µs: open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/scheduled-stop-443472/pid: no such file or directory
I0120 18:48:31.897707  304547 retry.go:31] will retry after 431.796µs: open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/scheduled-stop-443472/pid: no such file or directory
I0120 18:48:31.898819  304547 retry.go:31] will retry after 959.66µs: open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/scheduled-stop-443472/pid: no such file or directory
I0120 18:48:31.899929  304547 retry.go:31] will retry after 1.841926ms: open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/scheduled-stop-443472/pid: no such file or directory
I0120 18:48:31.902068  304547 retry.go:31] will retry after 2.012481ms: open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/scheduled-stop-443472/pid: no such file or directory
I0120 18:48:31.904180  304547 retry.go:31] will retry after 4.497816ms: open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/scheduled-stop-443472/pid: no such file or directory
I0120 18:48:31.909341  304547 retry.go:31] will retry after 7.975953ms: open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/scheduled-stop-443472/pid: no such file or directory
I0120 18:48:31.917618  304547 retry.go:31] will retry after 11.743465ms: open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/scheduled-stop-443472/pid: no such file or directory
I0120 18:48:31.929880  304547 retry.go:31] will retry after 12.565136ms: open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/scheduled-stop-443472/pid: no such file or directory
I0120 18:48:31.943153  304547 retry.go:31] will retry after 27.372696ms: open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/scheduled-stop-443472/pid: no such file or directory
I0120 18:48:31.971373  304547 retry.go:31] will retry after 39.25536ms: open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/scheduled-stop-443472/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-443472 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-443472 -n scheduled-stop-443472
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-443472
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-443472 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-443472
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-443472: exit status 7 (72.257619ms)

                                                
                                                
-- stdout --
	scheduled-stop-443472
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-443472 -n scheduled-stop-443472
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-443472 -n scheduled-stop-443472: exit status 7 (70.272031ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-443472" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-443472
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-443472: (5.272437288s)
--- PASS: TestScheduledStopUnix (106.27s)
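
The scheduled-stop surface exercised here is small. A sketch of the lifecycle (all flags appear in the log above):

    minikube stop -p scheduled-stop-443472 --schedule 5m                  # arm a stop five minutes out
    minikube status --format={{.TimeToStop}} -p scheduled-stop-443472     # inspect the countdown
    minikube stop -p scheduled-stop-443472 --cancel-scheduled             # disarm it
    minikube stop -p scheduled-stop-443472 --schedule 15s                 # re-arm; status reports Stopped shortly after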

                                                
                                    
TestInsufficientStorage (10.56s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-932038 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-932038 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.090503856s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2a15a0e9-095c-4e6f-be1d-aea1d5d870f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-932038] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0496bfbe-c198-4c70-921e-652f80484738","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20109"}}
	{"specversion":"1.0","id":"0a81baac-84e5-4f5a-a564-8a130dafae24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"00be3f0e-7880-4ce2-82ce-0f003c377c61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20109-299163/kubeconfig"}}
	{"specversion":"1.0","id":"fa42bd6d-f482-4c63-906b-6e863abb8626","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-299163/.minikube"}}
	{"specversion":"1.0","id":"c627051a-2cc1-4845-8fa9-f0d2a06d2cee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"2b754394-680b-4bac-b213-06d39e70b3a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f72116f8-0ba3-4858-a50f-f539773c732e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"dcf0e15b-a827-4652-8524-84c0f358444f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"421014d6-ada0-42b7-9feb-ee20c5a1b2bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4b408b65-3e75-4a72-9dbd-889b14ea9540","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"caaf1f8a-a7f4-41f5-93c4-253f3aac5773","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-932038\" primary control-plane node in \"insufficient-storage-932038\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"56cb766d-7a50-407d-9bb8-b888f6ee85ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ba43c572-70fc-42b8-9f02-7545f6336644","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"65898a75-583c-4828-844e-d2d4f7fe09cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-932038 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-932038 --output=json --layout=cluster: exit status 7 (272.351946ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-932038","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-932038","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0120 18:49:56.577809  443830 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-932038" does not appear in /home/jenkins/minikube-integration/20109-299163/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-932038 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-932038 --output=json --layout=cluster: exit status 7 (281.741015ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-932038","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-932038","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0120 18:49:56.860192  443892 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-932038" does not appear in /home/jenkins/minikube-integration/20109-299163/kubeconfig
	E0120 18:49:56.870877  443892 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/insufficient-storage-932038/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-932038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-932038
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-932038: (1.915503869s)
--- PASS: TestInsufficientStorage (10.56s)
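
With `--output=json`, start progress arrives as one CloudEvent per line, which makes the failure machine-readable. A sketch of pulling the error event out of the stream with jq (assuming jq is available; the event fields are the ones shown above):

    minikube start -p insufficient-storage-932038 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message)"'
    # -> RSRC_DOCKER_STORAGE: Docker is out of disk space! (/var is at 100% of capacity). ...

The harness drives this state via the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE settings visible in the event stream, rather than by actually filling the disk.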

                                                
                                    
TestRunningBinaryUpgrade (82.57s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3323542733 start -p running-upgrade-310314 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3323542733 start -p running-upgrade-310314 --memory=2200 --vm-driver=docker  --container-runtime=crio: (37.914126935s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-310314 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-310314 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.193381909s)
helpers_test.go:175: Cleaning up "running-upgrade-310314" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-310314
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-310314: (2.753258313s)
--- PASS: TestRunningBinaryUpgrade (82.57s)
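
The upgrade being tested: a cluster created by a previously released binary is restarted in place by the binary under test. A sketch (the `/tmp/minikube-v1.26.0.*` binary is the released-version fixture the harness fetched; note the old release still spells the driver flag `--vm-driver`):

    /tmp/minikube-v1.26.0.3323542733 start -p running-upgrade-310314 --memory=2200 --vm-driver=docker --container-runtime=crio
    out/minikube-linux-arm64 start -p running-upgrade-310314 --memory=2200 --driver=docker --container-runtime=crio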

                                                
                                    
TestKubernetesUpgrade (410.24s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-434479 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-434479 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m16.355730268s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-434479
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-434479: (11.643175852s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-434479 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-434479 status --format={{.Host}}: exit status 7 (114.647782ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-434479 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-434479 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m41.383382419s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-434479 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-434479 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-434479 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (104.444476ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-434479] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-299163/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-299163/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-434479
	    minikube start -p kubernetes-upgrade-434479 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4344792 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.0, by running:
	    
	    minikube start -p kubernetes-upgrade-434479 --kubernetes-version=v1.32.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-434479 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-434479 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.138205108s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-434479" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-434479
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-434479: (2.40405601s)
--- PASS: TestKubernetesUpgrade (410.24s)
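
In short: in-place upgrades are supported, downgrades are refused up front. A sketch of the start invocations and their outcomes (versions and exit code from this run):

    minikube start -p kubernetes-upgrade-434479 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
    minikube stop  -p kubernetes-upgrade-434479
    minikube start -p kubernetes-upgrade-434479 --memory=2200 --kubernetes-version=v1.32.0 --driver=docker --container-runtime=crio   # in-place upgrade
    minikube start -p kubernetes-upgrade-434479 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio   # exit 106: K8S_DOWNGRADE_UNSUPPORTED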

                                                
                                    
TestMissingContainerUpgrade (160.44s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.973402687 start -p missing-upgrade-916392 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.973402687 start -p missing-upgrade-916392 --memory=2200 --driver=docker  --container-runtime=crio: (1m24.43499467s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-916392
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-916392: (10.478787032s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-916392
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-916392 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-916392 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m2.263927718s)
helpers_test.go:175: Cleaning up "missing-upgrade-916392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-916392
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-916392: (2.06748339s)
--- PASS: TestMissingContainerUpgrade (160.44s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-979209 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-979209 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (96.226542ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-979209] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-299163/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-299163/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
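
The two flags are mutually exclusive by design; the suggested escape hatch is to clear any globally configured version. A sketch (both commands come from the output above):

    minikube start -p NoKubernetes-979209 --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=crio   # exit 14 (MK_USAGE)
    minikube config unset kubernetes-version   # clear the global default, then retry with --no-kubernetes alone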

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-979209 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-979209 --driver=docker  --container-runtime=crio: (38.835551712s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-979209 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.54s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-979209 --no-kubernetes --driver=docker  --container-runtime=crio
E0120 18:50:39.825258  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-979209 --no-kubernetes --driver=docker  --container-runtime=crio: (5.26698353s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-979209 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-979209 status -o json: exit status 2 (291.086321ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-979209","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-979209
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-979209: (2.018554604s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.58s)

                                                
                                    
TestNoKubernetes/serial/Start (9.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-979209 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-979209 --no-kubernetes --driver=docker  --container-runtime=crio: (9.502927655s)
--- PASS: TestNoKubernetes/serial/Start (9.50s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-979209 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-979209 "sudo systemctl is-active --quiet service kubelet": exit status 1 (329.902496ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)
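
The "status 3" in stderr is the interesting bit: `systemctl is-active` conventionally exits 3 for an inactive unit, so the non-zero `minikube ssh` exit is exactly the proof the test wants. A sketch of the same probe:

    minikube ssh -p NoKubernetes-979209 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # non-zero here (the remote systemctl exited 3, i.e. kubelet is not running)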

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
E0120 18:50:56.651644  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNoKubernetes/serial/ProfileList (1.18s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-979209
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-979209: (1.275104011s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

TestNoKubernetes/serial/StartNoArgs (7.55s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-979209 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-979209 --driver=docker  --container-runtime=crio: (7.548451643s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.55s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-979209 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-979209 "sudo systemctl is-active --quiet service kubelet": exit status 1 (354.910865ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

TestStoppedBinaryUpgrade/Setup (0.75s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.75s)

TestStoppedBinaryUpgrade/Upgrade (91.71s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.164658960 start -p stopped-upgrade-027569 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0120 18:52:53.582985  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.164658960 start -p stopped-upgrade-027569 --memory=2200 --vm-driver=docker  --container-runtime=crio: (41.722120127s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.164658960 -p stopped-upgrade-027569 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.164658960 -p stopped-upgrade-027569 stop: (2.642629126s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-027569 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-027569 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (47.347541564s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (91.71s)
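
Note: the sequence above is the whole upgrade scenario: boot with a released v1.26.0 binary, stop the cluster, then start the same profile with the binary under test. A minimal Go sketch of driving that sequence (binary paths and profile name taken from the log; error handling trimmed):

	package main

	import (
		"os"
		"os/exec"
	)

	// run executes one CLI step, streaming its output like the test harness does.
	func run(bin string, args ...string) error {
		cmd := exec.Command(bin, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		old, cur, profile := "/tmp/minikube-v1.26.0.164658960", "out/minikube-linux-arm64", "stopped-upgrade-027569"
		steps := [][]string{
			{old, "start", "-p", profile, "--memory=2200", "--vm-driver=docker", "--container-runtime=crio"},
			{old, "-p", profile, "stop"},
			{cur, "start", "-p", profile, "--memory=2200", "--driver=docker", "--container-runtime=crio"},
		}
		for _, s := range steps {
			if err := run(s[0], s[1:]...); err != nil {
				panic(err)
			}
		}
	}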

TestStoppedBinaryUpgrade/MinikubeLogs (1.36s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-027569
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-027569: (1.361858418s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.36s)

TestPause/serial/Start (77.71s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-714323 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0120 18:55:39.825559  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-714323 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m17.708344871s)
--- PASS: TestPause/serial/Start (77.71s)

TestPause/serial/SecondStartNoReconfiguration (28s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-714323 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-714323 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.963954254s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (28.00s)

TestPause/serial/Pause (0.83s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-714323 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.83s)

TestPause/serial/VerifyStatus (0.36s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-714323 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-714323 --output=json --layout=cluster: exit status 2 (356.14871ms)

-- stdout --
	{"Name":"pause-714323","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-714323","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)
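
Note: the --layout=cluster output borrows HTTP-style status codes: 200 OK, 405 Stopped, 418 Paused. A minimal Go sketch for reading it (shapes inferred from the printed JSON only, not minikube's own definitions):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type component struct {
		Name       string
		StatusCode int
		StatusName string
	}

	type node struct {
		Name       string
		StatusCode int
		StatusName string
		Components map[string]component
	}

	type clusterState struct {
		Name       string
		StatusCode int
		StatusName string
		Nodes      []node
	}

	func main() {
		raw := `{"Name":"pause-714323","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-714323","StatusCode":200,"StatusName":"OK","Components":{"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
		var cs clusterState
		if err := json.Unmarshal([]byte(raw), &cs); err != nil {
			panic(err)
		}
		// Prints: Paused / kubelet: Stopped
		fmt.Println(cs.StatusName, "/ kubelet:", cs.Nodes[0].Components["kubelet"].StatusName)
	}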

TestPause/serial/Unpause (0.72s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-714323 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.72s)

TestPause/serial/PauseAgain (0.92s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-714323 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.92s)

TestPause/serial/DeletePaused (2.71s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-714323 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-714323 --alsologtostderr -v=5: (2.713623519s)
--- PASS: TestPause/serial/DeletePaused (2.71s)

TestPause/serial/VerifyDeletedResources (0.41s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-714323
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-714323: exit status 1 (20.046993ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-714323: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.41s)
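
Note: the non-zero exit from docker volume inspect is the expected result here: once delete has cleaned up, the daemon answers "no such volume". A minimal Go sketch of the same check:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// volumeGone reports whether `docker volume inspect` fails, which after a
	// successful delete is the expected outcome ("no such volume" on stderr).
	func volumeGone(name string) bool {
		return exec.Command("docker", "volume", "inspect", name).Run() != nil
	}

	func main() {
		fmt.Println("pause-714323 removed:", volumeGone("pause-714323"))
	}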

TestNetworkPlugins/group/false (4.77s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-985898 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-985898 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (279.10171ms)

-- stdout --
	* [false-985898] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-299163/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-299163/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0120 18:58:03.783869  484638 out.go:345] Setting OutFile to fd 1 ...
	I0120 18:58:03.784082  484638 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 18:58:03.784088  484638 out.go:358] Setting ErrFile to fd 2...
	I0120 18:58:03.784094  484638 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 18:58:03.784354  484638 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-299163/.minikube/bin
	I0120 18:58:03.784795  484638 out.go:352] Setting JSON to false
	I0120 18:58:03.785762  484638 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9628,"bootTime":1737389856,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0120 18:58:03.785860  484638 start.go:139] virtualization:  
	I0120 18:58:03.789768  484638 out.go:177] * [false-985898] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0120 18:58:03.792905  484638 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 18:58:03.792956  484638 notify.go:220] Checking for updates...
	I0120 18:58:03.799564  484638 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 18:58:03.802874  484638 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-299163/kubeconfig
	I0120 18:58:03.805864  484638 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-299163/.minikube
	I0120 18:58:03.809461  484638 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0120 18:58:03.812658  484638 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 18:58:03.816243  484638 config.go:182] Loaded profile config "force-systemd-flag-798085": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 18:58:03.816401  484638 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 18:58:03.875235  484638 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 18:58:03.875356  484638 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 18:58:03.950690  484638 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-20 18:58:03.941178384 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 18:58:03.950805  484638 docker.go:318] overlay module found
	I0120 18:58:03.954215  484638 out.go:177] * Using the docker driver based on user configuration
	I0120 18:58:03.957261  484638 start.go:297] selected driver: docker
	I0120 18:58:03.957293  484638 start.go:901] validating driver "docker" against <nil>
	I0120 18:58:03.957309  484638 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 18:58:03.961254  484638 out.go:201] 
	W0120 18:58:03.964243  484638 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0120 18:58:03.967157  484638 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-985898 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-985898

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-985898

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-985898

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-985898

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-985898

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-985898

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-985898

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-985898

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-985898

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-985898

>>> host: /etc/nsswitch.conf:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: /etc/hosts:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: /etc/resolv.conf:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-985898

>>> host: crictl pods:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: crictl containers:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> k8s: describe netcat deployment:
error: context "false-985898" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-985898" does not exist

>>> k8s: netcat logs:
error: context "false-985898" does not exist

>>> k8s: describe coredns deployment:
error: context "false-985898" does not exist

>>> k8s: describe coredns pods:
error: context "false-985898" does not exist

>>> k8s: coredns logs:
error: context "false-985898" does not exist

>>> k8s: describe api server pod(s):
error: context "false-985898" does not exist

>>> k8s: api server logs:
error: context "false-985898" does not exist

>>> host: /etc/cni:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: ip a s:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: ip r s:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: iptables-save:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: iptables table nat:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> k8s: describe kube-proxy daemon set:
error: context "false-985898" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-985898" does not exist

>>> k8s: kube-proxy logs:
error: context "false-985898" does not exist

>>> host: kubelet daemon status:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: kubelet daemon config:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> k8s: kubelet logs:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-985898

>>> host: docker daemon status:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: docker daemon config:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: /etc/docker/daemon.json:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: docker system info:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: cri-docker daemon status:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: cri-docker daemon config:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: cri-dockerd version:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: containerd daemon status:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: containerd daemon config:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: /etc/containerd/config.toml:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: containerd config dump:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: crio daemon status:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: crio daemon config:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: /etc/crio:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

>>> host: crio config:
* Profile "false-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-985898"

----------------------- debugLogs end: false-985898 [took: 4.320188768s] --------------------------------
helpers_test.go:175: Cleaning up "false-985898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-985898
--- PASS: TestNetworkPlugins/group/false (4.77s)
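
Note: this subtest passes because start refuses the flag combination up front: CRI-O provides no built-in pod networking, so --container-runtime=crio with --cni=false is rejected with usage error MK_USAGE (exit status 14) before any cluster is created. A hedged Go sketch of that gate; it illustrates the rule only and is not minikube's actual validation code:

	package main

	import (
		"fmt"
		"os"
	)

	// validateCNI illustrates the rule enforced above; minikube's real
	// validation lives elsewhere and covers more cases.
	func validateCNI(runtime, cni string) error {
		// CRI-O ships no built-in pod networking, so it always needs a CNI plugin.
		if runtime == "crio" && cni == "false" {
			return fmt.Errorf(`The "crio" container runtime requires CNI`)
		}
		return nil
	}

	func main() {
		if err := validateCNI("crio", "false"); err != nil {
			fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
			os.Exit(14) // the exit status observed in the run above
		}
	}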

TestStartStop/group/old-k8s-version/serial/FirstStart (181.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-249862 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0120 19:00:39.825584  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-249862 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (3m1.614736571s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (181.61s)

TestStartStop/group/no-preload/serial/FirstStart (74.62s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-244590 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-244590 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m14.615532655s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (74.62s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-249862 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1f3f72c6-a6dc-449f-b783-f39244796155] Pending
helpers_test.go:344: "busybox" [1f3f72c6-a6dc-449f-b783-f39244796155] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1f3f72c6-a6dc-449f-b783-f39244796155] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004264854s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-249862 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.66s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-249862 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-249862 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.120937271s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-249862 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.28s)

TestStartStop/group/old-k8s-version/serial/Stop (12.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-249862 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-249862 --alsologtostderr -v=3: (12.67261628s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.67s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-249862 -n old-k8s-version-249862
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-249862 -n old-k8s-version-249862: exit status 7 (101.161782ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-249862 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)
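
Note: "exit status 7 (may be ok)" is consistent with reading the status exit code as a bitmask in which the host, the cluster, and Kubernetes each contribute one "not running" bit, so 7 = 1+2+4 means everything is stopped, which is exactly right after minikube stop. Treat the decomposition below as an assumption, not a documented contract:

	package main

	import "fmt"

	// Assumed bit layout; an illustration, not a documented interface.
	const (
		hostNotRunning    = 1 << 0
		clusterNotRunning = 1 << 1
		k8sNotRunning     = 1 << 2
	)

	func main() {
		code := 7 // observed above, after `minikube stop`
		fmt.Println("host stopped:   ", code&hostNotRunning != 0)
		fmt.Println("cluster stopped:", code&clusterNotRunning != 0)
		fmt.Println("k8s stopped:    ", code&k8sNotRunning != 0)
	}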

TestStartStop/group/old-k8s-version/serial/SecondStart (135.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-249862 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0120 19:02:53.582491  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-249862 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m15.585766389s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-249862 -n old-k8s-version-249862
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (135.94s)

TestStartStop/group/no-preload/serial/DeployApp (10.36s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-244590 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a1836435-d5b1-4316-96cd-f6a77d42cf61] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a1836435-d5b1-4316-96cd-f6a77d42cf61] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.00364295s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-244590 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.36s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-244590 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-244590 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.064046879s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-244590 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/no-preload/serial/Stop (12.35s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-244590 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-244590 --alsologtostderr -v=3: (12.34802033s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.35s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-244590 -n no-preload-244590
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-244590 -n no-preload-244590: exit status 7 (72.778084ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-244590 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (288.49s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-244590 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-244590 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (4m48.140910586s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-244590 -n no-preload-244590
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (288.49s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-mgpq6" [8e09ed5f-a13f-42a2-ae29-b8fbbb9e1710] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00432433s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-mgpq6" [8e09ed5f-a13f-42a2-ae29-b8fbbb9e1710] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005996361s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-249862 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-249862 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)
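
Note: the three "Found non-minikube image" lines above are informational: the test lists every image in the cluster and reports anything outside the expected minikube image set. A minimal Go sketch of that filter; the allowlist here is hypothetical (the real, version-dependent set lives in the test sources):

	package main

	import "fmt"

	func main() {
		// Hypothetical allowlist; the test's real expected-image set lives in
		// start_stop_delete_test.go and depends on the Kubernetes version.
		expected := map[string]bool{
			"registry.k8s.io/pause:3.2": true,
		}
		listed := []string{
			"registry.k8s.io/pause:3.2",
			"kindest/kindnetd:v20241108-5c6d2daf",
			"gcr.io/k8s-minikube/busybox:1.28.4-glibc",
		}
		for _, img := range listed {
			if !expected[img] {
				fmt.Println("Found non-minikube image:", img)
			}
		}
	}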

TestStartStop/group/old-k8s-version/serial/Pause (2.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-249862 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-249862 -n old-k8s-version-249862
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-249862 -n old-k8s-version-249862: exit status 2 (336.907299ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-249862 -n old-k8s-version-249862
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-249862 -n old-k8s-version-249862: exit status 2 (329.096104ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-249862 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-249862 -n old-k8s-version-249862
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-249862 -n old-k8s-version-249862
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.94s)

TestStartStop/group/embed-certs/serial/FirstStart (51.61s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-477049 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E0120 19:05:39.825370  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-477049 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (51.607042445s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (51.61s)

TestStartStop/group/embed-certs/serial/DeployApp (11.38s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-477049 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [03480d35-00e2-410b-a8bf-42262e2b3753] Pending
helpers_test.go:344: "busybox" [03480d35-00e2-410b-a8bf-42262e2b3753] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004702125s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-477049 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.38s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-477049 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-477049 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.017969895s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-477049 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)
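
The --images and --registries flags above repoint the metrics-server addon; fake.domain makes the registry deliberately unresolvable. One way to confirm the override landed, sketched under the assumption that the deployment's first container is the patched one:

    # Expect an image reference built from the overrides above,
    # e.g. fake.domain/registry.k8s.io/echoserver:1.4 or similar.
    kubectl --context embed-certs-477049 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'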

TestStartStop/group/embed-certs/serial/Stop (11.94s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-477049 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-477049 --alsologtostderr -v=3: (11.940043109s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.94s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-477049 -n embed-certs-477049
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-477049 -n embed-certs-477049: exit status 7 (73.97042ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-477049 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
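
Note the "status error: exit status 7 (may be ok)" wording: in this report a stopped host makes minikube status print Stopped and exit 7, while a paused cluster exits 2 (see the Pause sections). A sketch of tolerating that, grounded only in the exit codes visible here:

    out/minikube-linux-arm64 status --format='{{.Host}}' -p embed-certs-477049
    rc=$?
    # Exit 7 with "Stopped" is the expected state right after 'minikube stop'.
    [ "$rc" -eq 7 ] && echo "host stopped (exit $rc), safe to enable addons"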

TestStartStop/group/embed-certs/serial/SecondStart (278.29s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-477049 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E0120 19:07:26.359234  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/old-k8s-version-249862/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:07:26.366423  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/old-k8s-version-249862/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:07:26.377925  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/old-k8s-version-249862/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:07:26.399408  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/old-k8s-version-249862/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:07:26.440854  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/old-k8s-version-249862/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:07:26.522455  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/old-k8s-version-249862/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:07:26.684792  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/old-k8s-version-249862/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:07:27.006554  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/old-k8s-version-249862/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:07:27.648425  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/old-k8s-version-249862/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:07:28.930460  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/old-k8s-version-249862/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:07:31.491994  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/old-k8s-version-249862/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:07:36.614351  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/old-k8s-version-249862/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:07:36.653776  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:07:46.856317  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/old-k8s-version-249862/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:07:53.582889  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:08:07.338243  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/old-k8s-version-249862/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-477049 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (4m37.920014663s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-477049 -n embed-certs-477049
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (278.29s)
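
The recurring cert_rotation.go:171 errors interleaved above point at client certificates of profiles torn down earlier in the run (old-k8s-version-249862, addons-483552, functional-632700). One plausible reading is that these are stale client-go watchers and harmless noise, since the surrounding test still passes. A quick check that the file really is gone, with the path copied verbatim from the error messages:

    # Expected to fail: the profile and its client.crt were deleted earlier in the run.
    ls /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/old-k8s-version-249862/client.crt \
      || echo "profile certificate removed; the watcher errors are stale"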

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-54zm7" [daa80ac4-2619-4a73-80cb-3e29ec74ba04] Running
E0120 19:08:48.299678  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/old-k8s-version-249862/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004338299s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-54zm7" [daa80ac4-2619-4a73-80cb-3e29ec74ba04] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003800154s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-244590 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-244590 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)
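
VerifyKubernetesImages parses the output of image list --format=json. A sketch for inspecting the same output by hand; the repoTags field name is an assumption about the JSON schema, and jq is not part of the test harness:

    # List the tags of every image in the node's runtime; the non-minikube
    # entries (busybox, kindnetd) are the ones the test calls out above.
    out/minikube-linux-arm64 -p no-preload-244590 image list --format=json \
      | jq -r '.[].repoTags[]'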

TestStartStop/group/no-preload/serial/Pause (3.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-244590 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-244590 -n no-preload-244590
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-244590 -n no-preload-244590: exit status 2 (342.967393ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-244590 -n no-preload-244590
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-244590 -n no-preload-244590: exit status 2 (318.092052ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-244590 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-244590 -n no-preload-244590
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-244590 -n no-preload-244590
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.11s)
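
The Pause subtest drives the same four commands for every profile. Condensed sketch, annotated with the outputs this report shows; while paused, {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped, both with exit status 2:

    out/minikube-linux-arm64 pause -p no-preload-244590 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p no-preload-244590  # "Paused", exit 2
    out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p no-preload-244590    # "Stopped", exit 2
    out/minikube-linux-arm64 unpause -p no-preload-244590 --alsologtostderr -v=1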

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-506737 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-506737 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (53.674718428s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.67s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-506737 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bd70327b-254c-4cba-8f79-cf616f8f844d] Pending
helpers_test.go:344: "busybox" [bd70327b-254c-4cba-8f79-cf616f8f844d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bd70327b-254c-4cba-8f79-cf616f8f844d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004236536s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-506737 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.39s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-506737 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-506737 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.037329671s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-506737 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-506737 --alsologtostderr -v=3
E0120 19:10:10.221113  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/old-k8s-version-249862/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-506737 --alsologtostderr -v=3: (11.971644374s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-506737 -n default-k8s-diff-port-506737
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-506737 -n default-k8s-diff-port-506737: exit status 7 (71.219278ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-506737 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (300.77s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-506737 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E0120 19:10:39.825251  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-506737 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (5m0.389597789s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-506737 -n default-k8s-diff-port-506737
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (300.77s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-g7hr9" [6a261510-61c4-443d-afc4-2b7e6c62e037] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004220438s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-g7hr9" [6a261510-61c4-443d-afc4-2b7e6c62e037] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003731883s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-477049 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-477049 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (3.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-477049 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-477049 -n embed-certs-477049
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-477049 -n embed-certs-477049: exit status 2 (345.933487ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-477049 -n embed-certs-477049
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-477049 -n embed-certs-477049: exit status 2 (328.055818ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-477049 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-477049 -n embed-certs-477049
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-477049 -n embed-certs-477049
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.11s)

TestStartStop/group/newest-cni/serial/FirstStart (36.53s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-450141 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-450141 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (36.534004259s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.53s)
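
Two flags distinguish the newest-cni profile: --network-plugin=cni, and an --extra-config entry that passes pod-network-cidr through to kubeadm in component.key=value form. Minimal sketch of the bare invocation; the profile name newest-cni-demo is arbitrary:

    out/minikube-linux-arm64 start -p newest-cni-demo --memory=2200 \
      --wait=apiserver,system_pods,default_sa \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0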

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-450141 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.99s)

TestStartStop/group/newest-cni/serial/Stop (1.24s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-450141 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-450141 --alsologtostderr -v=3: (1.242912742s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.24s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-450141 -n newest-cni-450141
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-450141 -n newest-cni-450141: exit status 7 (70.007181ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-450141 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (17.8s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-450141 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E0120 19:12:26.359065  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/old-k8s-version-249862/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-450141 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (17.395266806s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-450141 -n newest-cni-450141
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.80s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-450141 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/newest-cni/serial/Pause (3.09s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-450141 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-450141 -n newest-cni-450141
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-450141 -n newest-cni-450141: exit status 2 (347.140409ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-450141 -n newest-cni-450141
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-450141 -n newest-cni-450141: exit status 2 (324.223024ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-450141 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-450141 -n newest-cni-450141
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-450141 -n newest-cni-450141
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.09s)

TestNetworkPlugins/group/auto/Start (77.28s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-985898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0120 19:12:53.582481  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:12:54.063421  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/old-k8s-version-249862/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:13:30.417474  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/no-preload-244590/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:13:30.423940  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/no-preload-244590/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:13:30.435384  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/no-preload-244590/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:13:30.456948  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/no-preload-244590/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:13:30.498383  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/no-preload-244590/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:13:30.579784  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/no-preload-244590/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:13:30.741292  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/no-preload-244590/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:13:31.062988  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/no-preload-244590/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:13:31.705023  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/no-preload-244590/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:13:32.986581  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/no-preload-244590/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:13:35.548710  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/no-preload-244590/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:13:40.670505  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/no-preload-244590/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:13:50.912192  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/no-preload-244590/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-985898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m17.28007423s)
--- PASS: TestNetworkPlugins/group/auto/Start (77.28s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-985898 "pgrep -a kubelet"
I0120 19:13:58.115989  304547 config.go:182] Loaded profile config "auto-985898": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/auto/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-985898 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-hc25w" [9c38edd3-ba68-4d98-bc11-cca607b08658] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-hc25w" [9c38edd3-ba68-4d98-bc11-cca607b08658] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003533196s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.30s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-985898 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-985898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-985898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
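
Each network-plugin group ends with the same three probes against the netcat deployment: DNS resolution in-cluster, a localhost dial, and a hairpin dial in which the pod reaches itself through its own netcat Service. Condensed sketch; -w 5 is the nc connect timeout in seconds, -z scans without sending data, and -i 5 spaces out the attempts:

    kubectl --context auto-985898 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-985898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-985898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"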

TestNetworkPlugins/group/kindnet/Start (84.34s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-985898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0120 19:14:52.355877  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/no-preload-244590/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-985898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m24.336038292s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (84.34s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-t7r7n" [15f4c7af-2f99-453f-9c18-c70a8cda424c] Running
E0120 19:15:22.908175  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003169047s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-t7r7n" [15f4c7af-2f99-453f-9c18-c70a8cda424c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00380486s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-506737 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-506737 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-506737 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-506737 -n default-k8s-diff-port-506737
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-506737 -n default-k8s-diff-port-506737: exit status 2 (339.29062ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-506737 -n default-k8s-diff-port-506737
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-506737 -n default-k8s-diff-port-506737: exit status 2 (343.870358ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-506737 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-506737 -n default-k8s-diff-port-506737
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-506737 -n default-k8s-diff-port-506737
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.12s)

TestNetworkPlugins/group/calico/Start (71.58s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-985898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0120 19:15:39.825242  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/functional-632700/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-985898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m11.578913397s)
--- PASS: TestNetworkPlugins/group/calico/Start (71.58s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-8sfx2" [b669b399-7987-4f87-8565-5b40e9de1424] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00427858s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
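
ControllerPod only waits for the CNI daemon pod (label app=kindnet, namespace kube-system) to be Running. The equivalent by hand, as a sketch; the 10m timeout mirrors the wait above:

    kubectl --context kindnet-985898 -n kube-system get pods -l app=kindnet
    kubectl --context kindnet-985898 -n kube-system wait --for=condition=ready \
      pod -l app=kindnet --timeout=10m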

TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-985898 "pgrep -a kubelet"
I0120 19:16:01.467092  304547 config.go:182] Loaded profile config "kindnet-985898": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.43s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-985898 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-7lmd2" [6f92e6b6-b54e-48d5-a364-170efe895f4a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-7lmd2" [6f92e6b6-b54e-48d5-a364-170efe895f4a] Running
E0120 19:16:14.277774  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/no-preload-244590/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.005715475s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.43s)

TestNetworkPlugins/group/kindnet/DNS (0.37s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-985898 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.37s)

TestNetworkPlugins/group/kindnet/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-985898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.24s)

TestNetworkPlugins/group/kindnet/HairPin (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-985898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.31s)

TestNetworkPlugins/group/custom-flannel/Start (61.52s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-985898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-985898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m1.523866003s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (61.52s)
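
The custom-flannel group shows that --cni also accepts a path to a CNI manifest rather than a built-in name (testdata/kube-flannel.yaml here). Minimal sketch with an arbitrary profile name:

    out/minikube-linux-arm64 start -p custom-flannel-demo --memory=3072 \
      --cni=testdata/kube-flannel.yaml \
      --driver=docker --container-runtime=crio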

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-jx6t2" [28065547-07f6-4c77-962f-092c898ed899] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006446215s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-985898 "pgrep -a kubelet"
I0120 19:16:54.304190  304547 config.go:182] Loaded profile config "calico-985898": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

TestNetworkPlugins/group/calico/NetCatPod (13.34s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-985898 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-b9g7n" [13cc6f78-f544-4265-9dde-f9e96e28273c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-b9g7n" [13cc6f78-f544-4265-9dde-f9e96e28273c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.003653944s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.34s)

TestNetworkPlugins/group/calico/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-985898 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-985898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-985898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/enable-default-cni/Start (44.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-985898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-985898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (44.114652903s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (44.11s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-985898 "pgrep -a kubelet"
I0120 19:17:43.598136  304547 config.go:182] Loaded profile config "custom-flannel-985898": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (14.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-985898 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-7wzrh" [189c3f51-bd98-4f00-8f15-a42d81c08fae] Pending
helpers_test.go:344: "netcat-5d86dc444-7wzrh" [189c3f51-bd98-4f00-8f15-a42d81c08fae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-7wzrh" [189c3f51-bd98-4f00-8f15-a42d81c08fae] Running
E0120 19:17:53.582517  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/addons-483552/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.003262938s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.34s)

TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-985898 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-985898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-985898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-985898 "pgrep -a kubelet"
I0120 19:18:18.345247  304547 config.go:182] Loaded profile config "enable-default-cni-985898": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-985898 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-7crz5" [11f04724-b3d5-4b83-87aa-210030daa554] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-7crz5" [11f04724-b3d5-4b83-87aa-210030daa554] Running
E0120 19:18:30.417493  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/no-preload-244590/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004532101s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.36s)

TestNetworkPlugins/group/flannel/Start (61.64s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-985898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-985898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m1.6354267s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.64s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-985898 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-985898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-985898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

TestNetworkPlugins/group/bridge/Start (69.3s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-985898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0120 19:18:58.119221  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/no-preload-244590/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:18:58.389205  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/auto-985898/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:18:58.395921  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/auto-985898/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:18:58.407612  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/auto-985898/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:18:58.428968  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/auto-985898/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:18:58.470561  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/auto-985898/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:18:58.552644  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/auto-985898/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:18:58.714326  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/auto-985898/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:18:59.035923  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/auto-985898/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:18:59.678147  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/auto-985898/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:19:00.959502  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/auto-985898/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:19:03.520817  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/auto-985898/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:19:08.642648  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/auto-985898/client.crt: no such file or directory" logger="UnhandledError"
E0120 19:19:18.884468  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/auto-985898/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-985898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m9.30083866s)
--- PASS: TestNetworkPlugins/group/bridge/Start (69.30s)
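The cert_rotation.go errors interleaved with this run appear to be cross-test noise rather than a bridge failure: client-go's certificate-rotation watcher in the shared test process still references client.crt files for profiles that earlier tests already deleted (auto-985898, no-preload-244590), so each refresh attempt logs an unhandled "no such file or directory" error while the bridge start itself passes.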

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-mhkj2" [16444cf1-0464-4149-b3ee-38f39e02eef7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004672412s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
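The ControllerPod subtest is the report's usual wait-for-label pattern: poll the pods matching app=flannel in the kube-flannel namespace until they report Running or the 10m0s deadline passes. A condensed Go sketch of that loop, assuming `kubectl` is on PATH and the context from this log exists:

// wait_for_label.go - sketch of the wait-for-label polling pattern, not the harness helper.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		// jsonpath prints one phase per matching pod, e.g. "Running".
		out, err := exec.Command("kubectl", "--context", "flannel-985898",
			"-n", "kube-flannel", "get", "pods", "-l", "app=flannel",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		// A real helper would check every pod's phase and readiness;
		// "contains Running" keeps the sketch short.
		if err == nil && strings.Contains(string(out), "Running") {
			fmt.Println("app=flannel pod is Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for app=flannel pods")
}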

TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-985898 "pgrep -a kubelet"
I0120 19:19:30.637217  304547 config.go:182] Loaded profile config "flannel-985898": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

TestNetworkPlugins/group/flannel/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-985898 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-4rczp" [d9b4f987-47e1-458e-8ea3-64441f6c0815] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-4rczp" [d9b4f987-47e1-458e-8ea3-64441f6c0815] Running
E0120 19:19:39.366747  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/auto-985898/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.013269671s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.38s)

TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-985898 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-985898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-985898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-985898 "pgrep -a kubelet"
I0120 19:20:05.220945  304547 config.go:182] Loaded profile config "bridge-985898": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-985898 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-5xsn9" [f7dd9fd9-76eb-4361-953f-d8954bdc3b39] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0120 19:20:05.827614  304547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/default-k8s-diff-port-506737/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-5xsn9" [f7dd9fd9-76eb-4361-953f-d8954bdc3b39] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004399155s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-985898 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-985898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-985898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (31/330)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.0/cached-images (0.00s)

TestDownloadOnly/v1.32.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.0/binaries (0.00s)

TestDownloadOnly/v1.32.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.0/kubectl (0.00s)

TestDownloadOnlyKic (0.57s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-991041 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-991041" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-991041
--- SKIP: TestDownloadOnlyKic (0.57s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0.32s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-483552 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.32s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-010074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-010074
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/kubenet (5.47s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-985898 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-985898

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-985898

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-985898

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-985898

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-985898

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-985898

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-985898

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-985898

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-985898

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-985898

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: /etc/hosts:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: /etc/resolv.conf:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-985898

>>> host: crictl pods:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: crictl containers:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> k8s: describe netcat deployment:
error: context "kubenet-985898" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-985898" does not exist

>>> k8s: netcat logs:
error: context "kubenet-985898" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-985898" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-985898" does not exist

>>> k8s: coredns logs:
error: context "kubenet-985898" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-985898" does not exist

>>> k8s: api server logs:
error: context "kubenet-985898" does not exist

>>> host: /etc/cni:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: ip a s:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: ip r s:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: iptables-save:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: iptables table nat:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-985898" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-985898" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-985898" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: kubelet daemon config:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> k8s: kubelet logs:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20109-299163/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 18:58:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: force-systemd-flag-798085
contexts:
- context:
    cluster: force-systemd-flag-798085
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 18:58:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: force-systemd-flag-798085
  name: force-systemd-flag-798085
current-context: force-systemd-flag-798085
kind: Config
preferences: {}
users:
- name: force-systemd-flag-798085
  user:
    client-certificate: /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/force-systemd-flag-798085/client.crt
    client-key: /home/jenkins/minikube-integration/20109-299163/.minikube/profiles/force-systemd-flag-798085/client.key

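The kubectl config dump above also explains every error in this debug log: current-context still points at force-systemd-flag-798085, a leftover profile from an earlier test, and no kubenet-985898 context was ever written because the test was skipped before any cluster started. Each kubectl call against the kubenet-985898 context therefore fails with "context was not found", and each minikube call reports the profile as missing.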
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-985898

>>> host: docker daemon status:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: docker daemon config:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: docker system info:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: cri-docker daemon status:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: cri-docker daemon config:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: cri-dockerd version:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: containerd daemon status:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: containerd daemon config:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: containerd config dump:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: crio daemon status:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: crio daemon config:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: /etc/crio:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

>>> host: crio config:
* Profile "kubenet-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-985898"

----------------------- debugLogs end: kubenet-985898 [took: 5.165751609s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-985898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-985898
--- SKIP: TestNetworkPlugins/group/kubenet (5.47s)

TestNetworkPlugins/group/cilium (6.13s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-985898 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-985898

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-985898

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-985898

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-985898

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-985898

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-985898

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-985898

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-985898

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-985898

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-985898

>>> host: /etc/nsswitch.conf:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: /etc/hosts:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: /etc/resolv.conf:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-985898

>>> host: crictl pods:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: crictl containers:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> k8s: describe netcat deployment:
error: context "cilium-985898" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-985898" does not exist

>>> k8s: netcat logs:
error: context "cilium-985898" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-985898" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-985898" does not exist

>>> k8s: coredns logs:
error: context "cilium-985898" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-985898" does not exist

>>> k8s: api server logs:
error: context "cilium-985898" does not exist

>>> host: /etc/cni:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: ip a s:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: ip r s:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: iptables-save:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: iptables table nat:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-985898

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-985898

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-985898" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-985898" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-985898

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-985898

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-985898" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-985898" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-985898" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-985898" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-985898" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: kubelet daemon config:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> k8s: kubelet logs:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
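The empty kubeconfig dumped above is the root cause of every kubectl failure in this section: with clusters, contexts, and users all null, no context named "cilium-985898" can resolve, so each probe fails before ever reaching a cluster. As an illustration only (this program is not part of minikube or its test suite; the context name is simply the one the debug collector asked for), a minimal Go sketch that performs the same context lookup with k8s.io/client-go:

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load kubeconfig the way kubectl does: $KUBECONFIG first,
		// then the default ~/.kube/config.
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		cfg, err := rules.Load()
		if err != nil {
			fmt.Fprintln(os.Stderr, "cannot load kubeconfig:", err)
			os.Exit(1)
		}
		// "cilium-985898" is the context requested in the log above.
		const name = "cilium-985898"
		if _, ok := cfg.Contexts[name]; !ok {
			// This is the situation the log shows: an empty config,
			// hence "context was not found for specified context".
			fmt.Printf("context %q not found (%d contexts in config)\n", name, len(cfg.Contexts))
			return
		}
		fmt.Printf("context %q found\n", name)
	}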

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-985898

>>> host: docker daemon status:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: docker daemon config:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: docker system info:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: cri-docker daemon status:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: cri-docker daemon config:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: cri-dockerd version:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: containerd daemon status:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: containerd daemon config:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: containerd config dump:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: crio daemon status:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: crio daemon config:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: /etc/crio:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

>>> host: crio config:
* Profile "cilium-985898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-985898"

----------------------- debugLogs end: cilium-985898 [took: 5.948075009s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-985898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-985898
--- SKIP: TestNetworkPlugins/group/cilium (6.13s)
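For readers tracing this SKIP back to the suite: a skip of this shape is raised via t.Skip before any cluster or profile is created, which is why the debug collector above finds neither a kubeconfig context nor a minikube profile. A hypothetical sketch of the pattern (the guard and names are illustrative, not minikube's actual net_test.go source):

	package net_test

	import "testing"

	func TestCiliumPlugin(t *testing.T) {
		// Hypothetical guard mirroring net_test.go:102: skip before
		// starting a cluster, so no context or profile ever exists.
		const outdatedAndInterfering = true
		if outdatedAndInterfering {
			t.Skip("Skipping the test as it's interfering with other tests and is outdated")
		}
		// ... cluster start and CNI connectivity checks would follow ...
	}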