Test Report: Docker_Linux_crio_arm64 19888

b240f9d77986126e9714444475c34e6cc49a474f:2024-12-10:37414

Failed tests (2/330)

| Order | Failed test                       | Duration (s) |
|-------|-----------------------------------|--------------|
| 36    | TestAddons/parallel/Ingress       | 154.12       |
| 38    | TestAddons/parallel/MetricsServer | 290.06       |
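
To iterate on these two failures outside CI, the standard go test regex filter works against the integration suite; a minimal sketch, assuming a minikube source checkout (CI passes extra start arguments not reproduced here):

    go test -v -timeout 60m ./test/integration -run 'TestAddons/parallel/(Ingress|MetricsServer)'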
TestAddons/parallel/Ingress (154.12s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-006125 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-006125 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-006125 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [072c5084-26b9-4eb6-8195-3cefc4703dd6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [072c5084-26b9-4eb6-8195-3cefc4703dd6] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003640134s
I1209 23:21:38.863225  297827 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-006125 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-006125 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.662732922s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-006125 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-006125 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
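helpers_test.go: note — the ssh exit status 28 above is curl's own exit code 28, "operation timed out": the request to the ingress on 127.0.0.1 inside the node got no response in roughly 2m10s of retries. A minimal way to replay the failing probe by hand, assuming the addons-006125 profile is still running (the -m 10 timeout is an addition here for quicker feedback, not part of the test):

    kubectl --context addons-006125 get pods -n ingress-nginx
    out/minikube-linux-arm64 -p addons-006125 ssh "curl -s -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"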
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-006125
helpers_test.go:235: (dbg) docker inspect addons-006125:

-- stdout --
	[
	    {
	        "Id": "1c0e3041e6a1d8fc5a9dd836c364732a80e9112598a3c390cfbc264ec577cf6f",
	        "Created": "2024-12-09T23:16:14.180400827Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 299081,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-09T23:16:14.345821087Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:51526bd7c0894c18bc1ef50650a0aaaea3bed24f70f72f77ac668ae72dfff137",
	        "ResolvConfPath": "/var/lib/docker/containers/1c0e3041e6a1d8fc5a9dd836c364732a80e9112598a3c390cfbc264ec577cf6f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c0e3041e6a1d8fc5a9dd836c364732a80e9112598a3c390cfbc264ec577cf6f/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c0e3041e6a1d8fc5a9dd836c364732a80e9112598a3c390cfbc264ec577cf6f/hosts",
	        "LogPath": "/var/lib/docker/containers/1c0e3041e6a1d8fc5a9dd836c364732a80e9112598a3c390cfbc264ec577cf6f/1c0e3041e6a1d8fc5a9dd836c364732a80e9112598a3c390cfbc264ec577cf6f-json.log",
	        "Name": "/addons-006125",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-006125:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-006125",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cc75547a635283e6768fbb1e623e3138cd28178773a8346f7e6d48d8a039b090-init/diff:/var/lib/docker/overlay2/79ad247dbfb2a02f0d5606be3cc57168963c65e7190a6e757a2f7b99e29945ea/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cc75547a635283e6768fbb1e623e3138cd28178773a8346f7e6d48d8a039b090/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cc75547a635283e6768fbb1e623e3138cd28178773a8346f7e6d48d8a039b090/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cc75547a635283e6768fbb1e623e3138cd28178773a8346f7e6d48d8a039b090/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-006125",
	                "Source": "/var/lib/docker/volumes/addons-006125/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-006125",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-006125",
	                "name.minikube.sigs.k8s.io": "addons-006125",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f0e0c2bf1546dabab93914126310ef3846108b37a10c0264cfd9463d38783b7c",
	            "SandboxKey": "/var/run/docker/netns/f0e0c2bf1546",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-006125": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "471830a571f18c34227cfa076927e612c43f763390187d57c10a2502667e21d9",
	                    "EndpointID": "86b08696a9497ef60a3b05258c4344f178bcfaf1b34440d662fedee2914f7283",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-006125",
	                        "1c0e3041e6a1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
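The Ports block in the inspect output above holds the dynamically assigned host ports (SSH on 33138, the API server's 8443 on 33141, and so on). The same Go-template lookup minikube itself runs later in these logs pulls one out directly, e.g. the SSH mapping:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-006125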
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-006125 -n addons-006125
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-006125 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-006125 logs -n 25: (1.575387306s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-328809                                                                     | download-only-328809   | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC | 09 Dec 24 23:15 UTC |
	| delete  | -p download-only-702821                                                                     | download-only-702821   | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC | 09 Dec 24 23:15 UTC |
	| start   | --download-only -p                                                                          | download-docker-257686 | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC |                     |
	|         | download-docker-257686                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-257686                                                                   | download-docker-257686 | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC | 09 Dec 24 23:15 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-505134   | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC |                     |
	|         | binary-mirror-505134                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33693                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-505134                                                                     | binary-mirror-505134   | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC | 09 Dec 24 23:15 UTC |
	| addons  | disable dashboard -p                                                                        | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC |                     |
	|         | addons-006125                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC |                     |
	|         | addons-006125                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-006125 --wait=true                                                                | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC | 09 Dec 24 23:19 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-006125 addons disable                                                                | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-006125 addons disable                                                                | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-006125 addons disable                                                                | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ip      | addons-006125 ip                                                                            | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	| addons  | addons-006125 addons disable                                                                | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-006125 addons                                                                        | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-006125 addons                                                                        | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC | 09 Dec 24 23:20 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC | 09 Dec 24 23:20 UTC |
	|         | -p addons-006125                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-006125 ssh cat                                                                       | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC | 09 Dec 24 23:20 UTC |
	|         | /opt/local-path-provisioner/pvc-2e1b855f-45ef-4582-80d6-f5a3741f0811_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-006125 addons disable                                                                | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC | 09 Dec 24 23:20 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-006125 addons disable                                                                | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC | 09 Dec 24 23:20 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-006125 addons                                                                        | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:21 UTC | 09 Dec 24 23:21 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-006125 addons                                                                        | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:21 UTC | 09 Dec 24 23:21 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-006125 addons                                                                        | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:21 UTC | 09 Dec 24 23:21 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-006125 ssh curl -s                                                                   | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:21 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-006125 ip                                                                            | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:23 UTC | 09 Dec 24 23:23 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
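	For copy-paste reference, the wrapped start invocation in the Audit table above reassembles into this single command (the Args column joined back together):

	    out/minikube-linux-arm64 start -p addons-006125 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher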
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:15:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.2 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:15:48.009072  298586 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:15:48.010065  298586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:15:48.010149  298586 out.go:358] Setting ErrFile to fd 2...
	I1209 23:15:48.010174  298586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:15:48.010521  298586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-292449/.minikube/bin
	I1209 23:15:48.011241  298586 out.go:352] Setting JSON to false
	I1209 23:15:48.012450  298586 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7089,"bootTime":1733779059,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1209 23:15:48.012591  298586 start.go:139] virtualization:  
	I1209 23:15:48.015679  298586 out.go:177] * [addons-006125] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1209 23:15:48.018874  298586 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 23:15:48.019012  298586 notify.go:220] Checking for updates...
	I1209 23:15:48.023671  298586 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:15:48.026520  298586 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-292449/kubeconfig
	I1209 23:15:48.028737  298586 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-292449/.minikube
	I1209 23:15:48.030918  298586 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 23:15:48.033156  298586 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:15:48.036177  298586 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:15:48.067551  298586 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1209 23:15:48.067686  298586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:15:48.127149  298586 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-12-09 23:15:48.117752325 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1209 23:15:48.127266  298586 docker.go:318] overlay module found
	I1209 23:15:48.129900  298586 out.go:177] * Using the docker driver based on user configuration
	I1209 23:15:48.131758  298586 start.go:297] selected driver: docker
	I1209 23:15:48.131782  298586 start.go:901] validating driver "docker" against <nil>
	I1209 23:15:48.131798  298586 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:15:48.132596  298586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:15:48.192793  298586 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-12-09 23:15:48.184046246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1209 23:15:48.193024  298586 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 23:15:48.193254  298586 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:15:48.195355  298586 out.go:177] * Using Docker driver with root privileges
	I1209 23:15:48.197128  298586 cni.go:84] Creating CNI manager for ""
	I1209 23:15:48.197205  298586 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 23:15:48.197221  298586 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 23:15:48.197302  298586 start.go:340] cluster config:
	{Name:addons-006125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-006125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:15:48.200787  298586 out.go:177] * Starting "addons-006125" primary control-plane node in "addons-006125" cluster
	I1209 23:15:48.202575  298586 cache.go:121] Beginning downloading kic base image for docker with crio
	I1209 23:15:48.204723  298586 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1209 23:15:48.206573  298586 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:15:48.206637  298586 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-292449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4
	I1209 23:15:48.206651  298586 cache.go:56] Caching tarball of preloaded images
	I1209 23:15:48.206671  298586 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1209 23:15:48.206756  298586 preload.go:172] Found /home/jenkins/minikube-integration/19888-292449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1209 23:15:48.206767  298586 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 23:15:48.207164  298586 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/config.json ...
	I1209 23:15:48.207198  298586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/config.json: {Name:mk210deb0807675a1ac7bb384b35a79a82b38cdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:15:48.223239  298586 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1209 23:15:48.223382  298586 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1209 23:15:48.223406  298586 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory, skipping pull
	I1209 23:15:48.223416  298586 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in cache, skipping pull
	I1209 23:15:48.223425  298586 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1209 23:15:48.223436  298586 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from local cache
	I1209 23:16:06.778276  298586 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from cached tarball
	I1209 23:16:06.778315  298586 cache.go:194] Successfully downloaded all kic artifacts
	I1209 23:16:06.778362  298586 start.go:360] acquireMachinesLock for addons-006125: {Name:mk95fb822276b933d828a80e13ca25416178bd49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:16:06.778494  298586 start.go:364] duration metric: took 108.022µs to acquireMachinesLock for "addons-006125"
	I1209 23:16:06.778526  298586 start.go:93] Provisioning new machine with config: &{Name:addons-006125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-006125 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 23:16:06.778620  298586 start.go:125] createHost starting for "" (driver="docker")
	I1209 23:16:06.781156  298586 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1209 23:16:06.781426  298586 start.go:159] libmachine.API.Create for "addons-006125" (driver="docker")
	I1209 23:16:06.781463  298586 client.go:168] LocalClient.Create starting
	I1209 23:16:06.781610  298586 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19888-292449/.minikube/certs/ca.pem
	I1209 23:16:07.078943  298586 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19888-292449/.minikube/certs/cert.pem
	I1209 23:16:07.671170  298586 cli_runner.go:164] Run: docker network inspect addons-006125 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 23:16:07.687510  298586 cli_runner.go:211] docker network inspect addons-006125 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 23:16:07.687609  298586 network_create.go:284] running [docker network inspect addons-006125] to gather additional debugging logs...
	I1209 23:16:07.687632  298586 cli_runner.go:164] Run: docker network inspect addons-006125
	W1209 23:16:07.702664  298586 cli_runner.go:211] docker network inspect addons-006125 returned with exit code 1
	I1209 23:16:07.702701  298586 network_create.go:287] error running [docker network inspect addons-006125]: docker network inspect addons-006125: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-006125 not found
	I1209 23:16:07.702732  298586 network_create.go:289] output of [docker network inspect addons-006125]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-006125 not found
	
	** /stderr **
	I1209 23:16:07.702831  298586 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 23:16:07.719914  298586 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001e28c40}
	I1209 23:16:07.719952  298586 network_create.go:124] attempt to create docker network addons-006125 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1209 23:16:07.720017  298586 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-006125 addons-006125
	I1209 23:16:07.791907  298586 network_create.go:108] docker network addons-006125 192.168.49.0/24 created
	I1209 23:16:07.791941  298586 kic.go:121] calculated static IP "192.168.49.2" for the "addons-006125" container
	I1209 23:16:07.792031  298586 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 23:16:07.808797  298586 cli_runner.go:164] Run: docker volume create addons-006125 --label name.minikube.sigs.k8s.io=addons-006125 --label created_by.minikube.sigs.k8s.io=true
	I1209 23:16:07.824934  298586 oci.go:103] Successfully created a docker volume addons-006125
	I1209 23:16:07.825066  298586 cli_runner.go:164] Run: docker run --rm --name addons-006125-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006125 --entrypoint /usr/bin/test -v addons-006125:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib
	I1209 23:16:09.933833  298586 cli_runner.go:217] Completed: docker run --rm --name addons-006125-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006125 --entrypoint /usr/bin/test -v addons-006125:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib: (2.108716106s)
	I1209 23:16:09.933864  298586 oci.go:107] Successfully prepared a docker volume addons-006125
	I1209 23:16:09.933897  298586 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:16:09.933917  298586 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 23:16:09.933991  298586 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19888-292449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-006125:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 23:16:14.112748  298586 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19888-292449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-006125:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir: (4.178695792s)
	I1209 23:16:14.112785  298586 kic.go:203] duration metric: took 4.178864582s to extract preloaded images to volume ...
	W1209 23:16:14.112952  298586 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1209 23:16:14.113069  298586 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 23:16:14.164785  298586 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-006125 --name addons-006125 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006125 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-006125 --network addons-006125 --ip 192.168.49.2 --volume addons-006125:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615
	I1209 23:16:14.525652  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Running}}
	I1209 23:16:14.545990  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:14.567828  298586 cli_runner.go:164] Run: docker exec addons-006125 stat /var/lib/dpkg/alternatives/iptables
	I1209 23:16:14.620520  298586 oci.go:144] the created container "addons-006125" has a running status.
	I1209 23:16:14.620554  298586 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa...
	I1209 23:16:14.919642  298586 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 23:16:14.952954  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:14.981192  298586 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 23:16:14.981211  298586 kic_runner.go:114] Args: [docker exec --privileged addons-006125 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 23:16:15.093238  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:15.118080  298586 machine.go:93] provisionDockerMachine start ...
	I1209 23:16:15.118182  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:15.142294  298586 main.go:141] libmachine: Using SSH client type: native
	I1209 23:16:15.142602  298586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1209 23:16:15.142612  298586 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:16:15.145192  298586 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42088->127.0.0.1:33138: read: connection reset by peer
	I1209 23:16:18.267158  298586 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-006125
	
	I1209 23:16:18.267193  298586 ubuntu.go:169] provisioning hostname "addons-006125"
	I1209 23:16:18.267299  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:18.285418  298586 main.go:141] libmachine: Using SSH client type: native
	I1209 23:16:18.285693  298586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1209 23:16:18.285710  298586 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-006125 && echo "addons-006125" | sudo tee /etc/hostname
	I1209 23:16:18.419316  298586 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-006125
	
	I1209 23:16:18.419400  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:18.443085  298586 main.go:141] libmachine: Using SSH client type: native
	I1209 23:16:18.443385  298586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1209 23:16:18.443410  298586 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-006125' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-006125/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-006125' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:16:18.567270  298586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:16:18.567308  298586 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19888-292449/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-292449/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-292449/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-292449/.minikube}
	I1209 23:16:18.567332  298586 ubuntu.go:177] setting up certificates
	I1209 23:16:18.567343  298586 provision.go:84] configureAuth start
	I1209 23:16:18.567418  298586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006125
	I1209 23:16:18.585473  298586 provision.go:143] copyHostCerts
	I1209 23:16:18.585596  298586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-292449/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-292449/.minikube/ca.pem (1082 bytes)
	I1209 23:16:18.585727  298586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-292449/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-292449/.minikube/cert.pem (1123 bytes)
	I1209 23:16:18.585788  298586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-292449/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-292449/.minikube/key.pem (1679 bytes)
	I1209 23:16:18.585838  298586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-292449/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-292449/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-292449/.minikube/certs/ca-key.pem org=jenkins.addons-006125 san=[127.0.0.1 192.168.49.2 addons-006125 localhost minikube]
	I1209 23:16:19.846315  298586 provision.go:177] copyRemoteCerts
	I1209 23:16:19.846385  298586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:16:19.846431  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:19.863559  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:19.952104  298586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-292449/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:16:19.976917  298586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-292449/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 23:16:20.013687  298586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-292449/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 23:16:20.073489  298586 provision.go:87] duration metric: took 1.506117982s to configureAuth
	I1209 23:16:20.073610  298586 ubuntu.go:193] setting minikube options for container-runtime
	I1209 23:16:20.073828  298586 config.go:182] Loaded profile config "addons-006125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:16:20.073957  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:20.091513  298586 main.go:141] libmachine: Using SSH client type: native
	I1209 23:16:20.091803  298586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1209 23:16:20.091827  298586 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:16:20.316949  298586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
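The SSH command above writes a one-line environment file and restarts the runtime; in the kicbase image the crio systemd unit is expected to pick that file up (the EnvironmentFile wiring is an assumption about the kicbase unit, not something shown in this log). A quick way to confirm the flag landed:

	# Show the drop-in and check that the running crio process picked up the flag
	docker exec addons-006125 cat /etc/sysconfig/crio.minikube
	docker exec addons-006125 ps -o args= -C crio | grep -- --insecure-registry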
	
	I1209 23:16:20.316970  298586 machine.go:96] duration metric: took 5.198867058s to provisionDockerMachine
	I1209 23:16:20.316981  298586 client.go:171] duration metric: took 13.535508825s to LocalClient.Create
	I1209 23:16:20.316994  298586 start.go:167] duration metric: took 13.535574229s to libmachine.API.Create "addons-006125"
	I1209 23:16:20.317003  298586 start.go:293] postStartSetup for "addons-006125" (driver="docker")
	I1209 23:16:20.317014  298586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:16:20.317078  298586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:16:20.317123  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:20.335574  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:20.424728  298586 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:16:20.428241  298586 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 23:16:20.428281  298586 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1209 23:16:20.428293  298586 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1209 23:16:20.428300  298586 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1209 23:16:20.428312  298586 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-292449/.minikube/addons for local assets ...
	I1209 23:16:20.428389  298586 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-292449/.minikube/files for local assets ...
	I1209 23:16:20.428418  298586 start.go:296] duration metric: took 111.409427ms for postStartSetup
	I1209 23:16:20.428738  298586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006125
	I1209 23:16:20.447222  298586 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/config.json ...
	I1209 23:16:20.447529  298586 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 23:16:20.447584  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:20.464864  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:20.552134  298586 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 23:16:20.556856  298586 start.go:128] duration metric: took 13.778218127s to createHost
	I1209 23:16:20.556932  298586 start.go:83] releasing machines lock for "addons-006125", held for 13.778424826s
	I1209 23:16:20.557021  298586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006125
	I1209 23:16:20.575011  298586 ssh_runner.go:195] Run: cat /version.json
	I1209 23:16:20.575079  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:20.575273  298586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:16:20.575331  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:20.598643  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:20.603521  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:20.820094  298586 ssh_runner.go:195] Run: systemctl --version
	I1209 23:16:20.824866  298586 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:16:20.968875  298586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1209 23:16:20.973371  298586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:16:20.996987  298586 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1209 23:16:20.997086  298586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:16:21.042853  298586 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
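The two find/-exec steps above show minikube's convention for neutralizing conflicting CNI configs: files are renamed with a .mk_disabled suffix rather than deleted, so the change stays reversible. A sketch of the reverse operation (illustrative, not taken from this run):

	# Re-enable previously disabled CNI configs by stripping the suffix
	sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;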
	I1209 23:16:21.042880  298586 start.go:495] detecting cgroup driver to use...
	I1209 23:16:21.042915  298586 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 23:16:21.042981  298586 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:16:21.062106  298586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:16:21.075258  298586 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:16:21.075347  298586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:16:21.090284  298586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:16:21.106633  298586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:16:21.198993  298586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:16:21.300289  298586 docker.go:233] disabling docker service ...
	I1209 23:16:21.300451  298586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:16:21.324000  298586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:16:21.336566  298586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:16:21.422343  298586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:16:21.512489  298586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:16:21.525048  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:16:21.541869  298586 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:16:21.541975  298586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:16:21.551857  298586 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:16:21.551990  298586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:16:21.562929  298586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:16:21.574082  298586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:16:21.584854  298586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:16:21.594274  298586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:16:21.604352  298586 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:16:21.621161  298586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:16:21.631237  298586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:16:21.640622  298586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
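Taken together, the sed/grep edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) leave /etc/crio/crio.conf.d/02-crio.conf with settings along these lines. This is a reconstruction from the commands, not a capture of the file, and the kicbase image may ship other keys in the same drop-in:

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"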
	I1209 23:16:21.649486  298586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:16:21.729351  298586 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:16:21.845168  298586 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:16:21.845257  298586 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:16:21.848961  298586 start.go:563] Will wait 60s for crictl version
	I1209 23:16:21.849031  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:16:21.852597  298586 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:16:21.892477  298586 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1209 23:16:21.892600  298586 ssh_runner.go:195] Run: crio --version
	I1209 23:16:21.930523  298586 ssh_runner.go:195] Run: crio --version
	I1209 23:16:21.970129  298586 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1209 23:16:21.971837  298586 cli_runner.go:164] Run: docker network inspect addons-006125 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 23:16:21.992162  298586 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 23:16:21.995767  298586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:16:22.010671  298586 kubeadm.go:883] updating cluster {Name:addons-006125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-006125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:16:22.010814  298586 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:16:22.010878  298586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:16:22.091855  298586 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:16:22.091882  298586 crio.go:433] Images already preloaded, skipping extraction
	I1209 23:16:22.091942  298586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:16:22.132163  298586 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:16:22.132188  298586 cache_images.go:84] Images are preloaded, skipping loading
	I1209 23:16:22.132197  298586 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1209 23:16:22.132331  298586 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-006125 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-006125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 23:16:22.132421  298586 ssh_runner.go:195] Run: crio config
	I1209 23:16:22.180839  298586 cni.go:84] Creating CNI manager for ""
	I1209 23:16:22.180915  298586 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 23:16:22.180942  298586 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:16:22.180988  298586 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-006125 NodeName:addons-006125 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 23:16:22.181144  298586 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-006125"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
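	
The rendered kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new before being copied into place (see the scp and cp steps below). It can be sanity-checked inside the node before init ever runs; a sketch, assuming the validate subcommand that kubeadm ships since 1.26 is available in v1.31.2:

	docker exec addons-006125 sudo /var/lib/minikube/binaries/v1.31.2/kubeadm \
	  config validate --config /var/tmp/minikube/kubeadm.yaml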
	
	I1209 23:16:22.181227  298586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 23:16:22.191686  298586 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:16:22.191803  298586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:16:22.200942  298586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1209 23:16:22.219486  298586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:16:22.238393  298586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I1209 23:16:22.256923  298586 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 23:16:22.260540  298586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:16:22.271531  298586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:16:22.363606  298586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:16:22.378171  298586 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125 for IP: 192.168.49.2
	I1209 23:16:22.378243  298586 certs.go:194] generating shared ca certs ...
	I1209 23:16:22.378275  298586 certs.go:226] acquiring lock for ca certs: {Name:mk059c8f83fb5636d205d77749a6b58de9d7eb72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:22.378921  298586 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-292449/.minikube/ca.key
	I1209 23:16:22.861873  298586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-292449/.minikube/ca.crt ...
	I1209 23:16:22.861908  298586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-292449/.minikube/ca.crt: {Name:mk9860f7e41edc46298549c904da9356bdddbd82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:22.862558  298586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-292449/.minikube/ca.key ...
	I1209 23:16:22.862578  298586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-292449/.minikube/ca.key: {Name:mkce41db02665dc8406951414731c623a2fb1b8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:22.863100  298586 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-292449/.minikube/proxy-client-ca.key
	I1209 23:16:23.321386  298586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-292449/.minikube/proxy-client-ca.crt ...
	I1209 23:16:23.321417  298586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-292449/.minikube/proxy-client-ca.crt: {Name:mk7020e675ca5ad8d2493e6af48756e7c7cfef31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:23.321623  298586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-292449/.minikube/proxy-client-ca.key ...
	I1209 23:16:23.321637  298586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-292449/.minikube/proxy-client-ca.key: {Name:mk7fbb3af690707396062e3bf118a4633aabef95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:23.322360  298586 certs.go:256] generating profile certs ...
	I1209 23:16:23.322427  298586 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.key
	I1209 23:16:23.322446  298586 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt with IP's: []
	I1209 23:16:23.622763  298586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt ...
	I1209 23:16:23.622794  298586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: {Name:mke74c1e48f47a63def2eed44915a9384d731e13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:23.622978  298586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.key ...
	I1209 23:16:23.622991  298586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.key: {Name:mka0b6333c9e6837ad55b080b9aed1423480853d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:23.623511  298586 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/apiserver.key.cba0d7a3
	I1209 23:16:23.623536  298586 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/apiserver.crt.cba0d7a3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1209 23:16:23.941703  298586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/apiserver.crt.cba0d7a3 ...
	I1209 23:16:23.941734  298586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/apiserver.crt.cba0d7a3: {Name:mk38a62048871c794f8fcb0fcaaa1a91632d5521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:23.942475  298586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/apiserver.key.cba0d7a3 ...
	I1209 23:16:23.942494  298586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/apiserver.key.cba0d7a3: {Name:mk66bab5632aee0aafbb8d6e409315b562bb1280 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:23.943041  298586 certs.go:381] copying /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/apiserver.crt.cba0d7a3 -> /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/apiserver.crt
	I1209 23:16:23.943165  298586 certs.go:385] copying /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/apiserver.key.cba0d7a3 -> /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/apiserver.key
	I1209 23:16:23.943228  298586 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/proxy-client.key
	I1209 23:16:23.943251  298586 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/proxy-client.crt with IP's: []
	I1209 23:16:24.639551  298586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/proxy-client.crt ...
	I1209 23:16:24.639591  298586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/proxy-client.crt: {Name:mk7aa9d546bcaf8bf1626d6d750cfff08df1915a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:24.640520  298586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/proxy-client.key ...
	I1209 23:16:24.640541  298586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/proxy-client.key: {Name:mk0253c42106ba746fda4716cda32f8c74383558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:24.640773  298586 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-292449/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:16:24.640820  298586 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-292449/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:16:24.640850  298586 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-292449/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:16:24.640878  298586 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-292449/.minikube/certs/key.pem (1679 bytes)
	I1209 23:16:24.641499  298586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-292449/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:16:24.672587  298586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-292449/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 23:16:24.696891  298586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-292449/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:16:24.722073  298586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-292449/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:16:24.746888  298586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 23:16:24.770829  298586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 23:16:24.795602  298586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:16:24.820070  298586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 23:16:24.844891  298586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-292449/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:16:24.869484  298586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:16:24.888892  298586 ssh_runner.go:195] Run: openssl version
	I1209 23:16:24.894586  298586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:16:24.904699  298586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:16:24.908359  298586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:16 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:16:24.908466  298586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:16:24.915519  298586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
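The openssl x509 -hash call above computes the subject hash that names the /etc/ssl/certs symlink: b5213941.0 is that hash with a .0 suffix, which is how OpenSSL locates trusted CAs in a hashed certificate directory. The value can be reproduced by hand against the same file:

	# Prints b5213941 for this CA, matching the symlink created above
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem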
	I1209 23:16:24.925466  298586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:16:24.929107  298586 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 23:16:24.929166  298586 kubeadm.go:392] StartCluster: {Name:addons-006125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-006125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:16:24.929258  298586 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:16:24.929318  298586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:16:24.966868  298586 cri.go:89] found id: ""
	I1209 23:16:24.966939  298586 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:16:24.976140  298586 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:16:24.985234  298586 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1209 23:16:24.985333  298586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:16:24.995052  298586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:16:24.995076  298586 kubeadm.go:157] found existing configuration files:
	
	I1209 23:16:24.995204  298586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 23:16:25.008963  298586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:16:25.009192  298586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:16:25.020118  298586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 23:16:25.030685  298586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:16:25.030770  298586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:16:25.040407  298586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 23:16:25.050454  298586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:16:25.050528  298586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:16:25.059708  298586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 23:16:25.069137  298586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:16:25.069247  298586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
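The four grep-then-rm exchanges above implement one cleanup rule: keep a kubeconfig under /etc/kubernetes only if it already points at this cluster's control-plane endpoint, otherwise remove it so kubeadm regenerates it. As a loop (an equivalent sketch, not the literal code minikube runs):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
	    || sudo rm -f /etc/kubernetes/$f.conf
	done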
	I1209 23:16:25.079042  298586 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 23:16:25.145648  298586 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1072-aws\n", err: exit status 1
	I1209 23:16:25.207898  298586 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 23:16:43.877652  298586 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 23:16:43.877711  298586 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 23:16:43.877798  298586 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1209 23:16:43.877854  298586 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1072-aws
	I1209 23:16:43.877889  298586 kubeadm.go:310] OS: Linux
	I1209 23:16:43.877934  298586 kubeadm.go:310] CGROUPS_CPU: enabled
	I1209 23:16:43.877981  298586 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1209 23:16:43.878028  298586 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1209 23:16:43.878076  298586 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1209 23:16:43.878124  298586 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1209 23:16:43.878173  298586 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1209 23:16:43.878218  298586 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1209 23:16:43.878266  298586 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1209 23:16:43.878313  298586 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1209 23:16:43.878384  298586 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 23:16:43.878478  298586 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 23:16:43.878567  298586 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 23:16:43.878629  298586 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 23:16:43.880713  298586 out.go:235]   - Generating certificates and keys ...
	I1209 23:16:43.880824  298586 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 23:16:43.880897  298586 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 23:16:43.880973  298586 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 23:16:43.881064  298586 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1209 23:16:43.881141  298586 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1209 23:16:43.881195  298586 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1209 23:16:43.881252  298586 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1209 23:16:43.881368  298586 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-006125 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1209 23:16:43.881421  298586 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1209 23:16:43.881540  298586 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-006125 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1209 23:16:43.881605  298586 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 23:16:43.881667  298586 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 23:16:43.881711  298586 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1209 23:16:43.881766  298586 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 23:16:43.881816  298586 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 23:16:43.881872  298586 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 23:16:43.881930  298586 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 23:16:43.881992  298586 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 23:16:43.882046  298586 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 23:16:43.882125  298586 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 23:16:43.882191  298586 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 23:16:43.884151  298586 out.go:235]   - Booting up control plane ...
	I1209 23:16:43.884259  298586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 23:16:43.884345  298586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 23:16:43.884421  298586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 23:16:43.884533  298586 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 23:16:43.884627  298586 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 23:16:43.884672  298586 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 23:16:43.884811  298586 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 23:16:43.884925  298586 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 23:16:43.884994  298586 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001675956s
	I1209 23:16:43.885076  298586 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 23:16:43.885140  298586 kubeadm.go:310] [api-check] The API server is healthy after 7.001486066s
	I1209 23:16:43.885256  298586 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 23:16:43.885393  298586 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 23:16:43.885459  298586 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 23:16:43.885659  298586 kubeadm.go:310] [mark-control-plane] Marking the node addons-006125 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 23:16:43.885722  298586 kubeadm.go:310] [bootstrap-token] Using token: n4ct29.x0cho1mo7j2uiwhv
	I1209 23:16:43.887626  298586 out.go:235]   - Configuring RBAC rules ...
	I1209 23:16:43.887853  298586 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 23:16:43.887978  298586 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 23:16:43.888130  298586 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 23:16:43.888277  298586 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 23:16:43.888398  298586 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 23:16:43.888486  298586 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 23:16:43.888604  298586 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 23:16:43.888652  298586 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 23:16:43.888704  298586 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 23:16:43.888712  298586 kubeadm.go:310] 
	I1209 23:16:43.888772  298586 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 23:16:43.888780  298586 kubeadm.go:310] 
	I1209 23:16:43.888856  298586 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 23:16:43.888863  298586 kubeadm.go:310] 
	I1209 23:16:43.888888  298586 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 23:16:43.888950  298586 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 23:16:43.889005  298586 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 23:16:43.889013  298586 kubeadm.go:310] 
	I1209 23:16:43.889067  298586 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 23:16:43.889074  298586 kubeadm.go:310] 
	I1209 23:16:43.889121  298586 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 23:16:43.889129  298586 kubeadm.go:310] 
	I1209 23:16:43.889182  298586 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 23:16:43.889261  298586 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 23:16:43.889332  298586 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 23:16:43.889340  298586 kubeadm.go:310] 
	I1209 23:16:43.889423  298586 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 23:16:43.889502  298586 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 23:16:43.889510  298586 kubeadm.go:310] 
	I1209 23:16:43.889598  298586 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token n4ct29.x0cho1mo7j2uiwhv \
	I1209 23:16:43.889704  298586 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ca6f268d2720bcc3dcc63add200af5349fb88d3412781ec48479c46aca637593 \
	I1209 23:16:43.889727  298586 kubeadm.go:310] 	--control-plane 
	I1209 23:16:43.889731  298586 kubeadm.go:310] 
	I1209 23:16:43.889819  298586 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 23:16:43.889826  298586 kubeadm.go:310] 
	I1209 23:16:43.889908  298586 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token n4ct29.x0cho1mo7j2uiwhv \
	I1209 23:16:43.890024  298586 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ca6f268d2720bcc3dcc63add200af5349fb88d3412781ec48479c46aca637593 
	I1209 23:16:43.890037  298586 cni.go:84] Creating CNI manager for ""
	I1209 23:16:43.890046  298586 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 23:16:43.893241  298586 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1209 23:16:43.895219  298586 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1209 23:16:43.899614  298586 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1209 23:16:43.899652  298586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1209 23:16:43.918103  298586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1209 23:16:44.212717  298586 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 23:16:44.212872  298586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:16:44.212937  298586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-006125 minikube.k8s.io/updated_at=2024_12_09T23_16_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=addons-006125 minikube.k8s.io/primary=true
	I1209 23:16:44.392783  298586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:16:44.392856  298586 ops.go:34] apiserver oom_adj: -16
	I1209 23:16:44.893531  298586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:16:45.393302  298586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:16:45.892832  298586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:16:46.392887  298586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:16:46.892881  298586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:16:47.392974  298586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:16:47.892891  298586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:16:48.000287  298586 kubeadm.go:1113] duration metric: took 3.787466049s to wait for elevateKubeSystemPrivileges
	I1209 23:16:48.000320  298586 kubeadm.go:394] duration metric: took 23.071159301s to StartCluster
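The run of kubectl get sa default calls above, issued roughly every 500ms, is a readiness poll: the "default" ServiceAccount only appears once the apiserver and controller-manager are serving, so its existence gates the elevateKubeSystemPrivileges step that granted kube-system the minikube-rbac cluster-admin binding. An equivalent sketch:

	until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done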
	I1209 23:16:48.000340  298586 settings.go:142] acquiring lock: {Name:mk5e8ade0aba5028c542a17cc3ac26b2fce0612a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:48.000573  298586 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-292449/kubeconfig
	I1209 23:16:48.001013  298586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-292449/kubeconfig: {Name:mkb1748c465c9240b5ac61d2f2426a68610afd6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:48.001257  298586 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 23:16:48.001488  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 23:16:48.001830  298586 config.go:182] Loaded profile config "addons-006125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:16:48.001869  298586 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1209 23:16:48.001968  298586 addons.go:69] Setting yakd=true in profile "addons-006125"
	I1209 23:16:48.001983  298586 addons.go:234] Setting addon yakd=true in "addons-006125"
	I1209 23:16:48.002011  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.002685  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.003152  298586 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-006125"
	I1209 23:16:48.003177  298586 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-006125"
	I1209 23:16:48.003221  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.003661  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.008635  298586 addons.go:69] Setting cloud-spanner=true in profile "addons-006125"
	I1209 23:16:48.011043  298586 addons.go:234] Setting addon cloud-spanner=true in "addons-006125"
	I1209 23:16:48.011221  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.011832  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.012138  298586 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-006125"
	I1209 23:16:48.012238  298586 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-006125"
	I1209 23:16:48.012328  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.012911  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.025123  298586 addons.go:69] Setting default-storageclass=true in profile "addons-006125"
	I1209 23:16:48.025175  298586 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-006125"
	I1209 23:16:48.025566  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.027914  298586 addons.go:69] Setting gcp-auth=true in profile "addons-006125"
	I1209 23:16:48.028070  298586 mustload.go:65] Loading cluster: addons-006125
	I1209 23:16:48.028545  298586 config.go:182] Loaded profile config "addons-006125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:16:48.029215  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.031587  298586 out.go:177] * Verifying Kubernetes components...
	I1209 23:16:48.044054  298586 addons.go:69] Setting ingress=true in profile "addons-006125"
	I1209 23:16:48.044214  298586 addons.go:234] Setting addon ingress=true in "addons-006125"
	I1209 23:16:48.044298  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.044969  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.059358  298586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:16:48.059679  298586 addons.go:69] Setting volcano=true in profile "addons-006125"
	I1209 23:16:48.059700  298586 addons.go:234] Setting addon volcano=true in "addons-006125"
	I1209 23:16:48.059714  298586 addons.go:69] Setting volumesnapshots=true in profile "addons-006125"
	I1209 23:16:48.059768  298586 addons.go:234] Setting addon volumesnapshots=true in "addons-006125"
	I1209 23:16:48.059836  298586 addons.go:69] Setting ingress-dns=true in profile "addons-006125"
	I1209 23:16:48.059859  298586 addons.go:234] Setting addon ingress-dns=true in "addons-006125"
	I1209 23:16:48.059887  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.060088  298586 addons.go:69] Setting inspektor-gadget=true in profile "addons-006125"
	I1209 23:16:48.060105  298586 addons.go:234] Setting addon inspektor-gadget=true in "addons-006125"
	I1209 23:16:48.060129  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.061455  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.078317  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.078743  298586 addons.go:69] Setting metrics-server=true in profile "addons-006125"
	I1209 23:16:48.078766  298586 addons.go:234] Setting addon metrics-server=true in "addons-006125"
	I1209 23:16:48.078801  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.079288  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.089845  298586 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-006125"
	I1209 23:16:48.089889  298586 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-006125"
	I1209 23:16:48.089933  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.090406  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.109716  298586 addons.go:69] Setting registry=true in profile "addons-006125"
	I1209 23:16:48.109749  298586 addons.go:234] Setting addon registry=true in "addons-006125"
	I1209 23:16:48.109791  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.113710  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.124912  298586 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1209 23:16:48.129068  298586 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1209 23:16:48.129100  298586 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1209 23:16:48.129174  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:48.129410  298586 addons.go:69] Setting storage-provisioner=true in profile "addons-006125"
	I1209 23:16:48.129443  298586 addons.go:234] Setting addon storage-provisioner=true in "addons-006125"
	I1209 23:16:48.129476  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.129947  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.143216  298586 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-006125"
	I1209 23:16:48.143254  298586 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-006125"
	I1209 23:16:48.143614  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.059842  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.171846  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.059801  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.192101  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
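
Each `docker container inspect addons-006125 --format={{.State.Status}}` run above is a cheap guard: before touching the node, every addon worker confirms the container backing the "machine" is still running. A sketch of that check, assuming the docker CLI is available:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerStatus returns docker's view of the container state ("running",
// "exited", ...), exactly what the --format={{.State.Status}} template yields.
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	status, err := containerStatus("addons-006125")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("state:", status) // proceed only when this is "running"
}
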
	I1209 23:16:48.213789  298586 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1209 23:16:48.218380  298586 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1209 23:16:48.244636  298586 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1209 23:16:48.248305  298586 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 23:16:48.253145  298586 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 23:16:48.253311  298586 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1209 23:16:48.256550  298586 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 23:16:48.269863  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1209 23:16:48.269968  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:48.256775  298586 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 23:16:48.281954  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1209 23:16:48.282050  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:48.286886  298586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1209 23:16:48.301467  298586 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1209 23:16:48.307057  298586 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1209 23:16:48.307080  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1209 23:16:48.307171  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:48.269446  298586 addons.go:234] Setting addon default-storageclass=true in "addons-006125"
	I1209 23:16:48.314698  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.315238  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.322970  298586 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1209 23:16:48.326190  298586 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 23:16:48.326228  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1209 23:16:48.326350  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
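
Lines of the form `scp memory --> /etc/kubernetes/addons/... (N bytes)` mean the manifest is rendered in memory (no local file) and streamed to the node over the SSH session whose host port was just resolved via the `"22/tcp"` docker inspect. A minimal sketch of streaming in-memory bytes to a remote file; it pipes through `ssh`/`sudo tee` as a moral equivalent of the log's scp step, and the host, port, and manifest are illustrative:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// pushBytes writes data to remotePath on the node by piping it through ssh
// into `sudo tee`, so the manifest never has to exist as a local file.
func pushBytes(port, remotePath string, data []byte) error {
	cmd := exec.Command("ssh", "-p", port, "docker@127.0.0.1",
		"sudo", "tee", remotePath)
	cmd.Stdin = bytes.NewReader(data) // manifest lives only in memory
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("push %s: %v: %s", remotePath, err, out)
	}
	return nil
}

func main() {
	manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo\n")
	// Port 33138 is the host port mapped to the node's 22/tcp in this run.
	if err := pushBytes("33138", "/etc/kubernetes/addons/demo.yaml", manifest); err != nil {
		fmt.Println(err)
	}
}
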
	I1209 23:16:48.269551  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.357174  298586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1209 23:16:48.367389  298586 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1209 23:16:48.369612  298586 out.go:177]   - Using image docker.io/registry:2.8.3
	I1209 23:16:48.372013  298586 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1209 23:16:48.372039  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1209 23:16:48.372108  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:48.378591  298586 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1209 23:16:48.390681  298586 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 23:16:48.390765  298586 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 23:16:48.390882  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:48.395235  298586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1209 23:16:48.397432  298586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1209 23:16:48.399553  298586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1209 23:16:48.401649  298586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1209 23:16:48.403447  298586 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1209 23:16:48.403472  298586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1209 23:16:48.403545  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:48.407183  298586 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1209 23:16:48.409959  298586 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1209 23:16:48.410030  298586 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1209 23:16:48.410143  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:48.421277  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:48.422694  298586 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-006125"
	I1209 23:16:48.422735  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.423148  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.433714  298586 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:16:48.434443  298586 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1209 23:16:48.434675  298586 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1209 23:16:48.435484  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1209 23:16:48.435755  298586 ssh_runner.go:195] Run: sudo systemctl start kubelet
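
Note the pairing in these logs: `ssh_runner.go:195] Run:` marks dispatch, and a later `ssh_runner.go:235] Completed: ... (duration)` marks completion with a duration metric, so long-running commands like the CoreDNS pipeline and `systemctl start kubelet` above overlap with the addon installs logged in between. A tiny sketch of that run-and-time pattern:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// timedRun logs dispatch and completion with a duration, mirroring the
// Run:/Completed: line pairs in this section.
func timedRun(name string, args ...string) error {
	fmt.Println("Run:", name, args)
	start := time.Now()
	err := exec.Command(name, args...).Run()
	fmt.Printf("Completed: %s: (%s)\n", name, time.Since(start))
	return err
}

func main() {
	_ = timedRun("sleep", "2")
}
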
	I1209 23:16:48.437270  298586 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 23:16:48.437288  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1209 23:16:48.437348  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:48.450243  298586 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1209 23:16:48.450274  298586 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1209 23:16:48.450339  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	W1209 23:16:48.473128  298586 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1209 23:16:48.473559  298586 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:16:48.473575  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 23:16:48.473638  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:48.523774  298586 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 23:16:48.523794  298586 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 23:16:48.523859  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:48.524206  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:48.528724  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:48.559486  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:48.605570  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:48.634183  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:48.646370  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:48.667210  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:48.668134  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:48.670439  298586 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1209 23:16:48.671580  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:48.672011  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:48.675387  298586 out.go:177]   - Using image docker.io/busybox:stable
	I1209 23:16:48.677870  298586 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 23:16:48.677890  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1209 23:16:48.677958  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:48.678382  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	W1209 23:16:48.679470  298586 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1209 23:16:48.679496  298586 retry.go:31] will retry after 182.427079ms: ssh: handshake failed: EOF
	I1209 23:16:48.716118  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	W1209 23:16:48.718791  298586 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1209 23:16:48.718823  298586 retry.go:31] will retry after 364.837186ms: ssh: handshake failed: EOF
	I1209 23:16:48.720558  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	W1209 23:16:48.721468  298586 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1209 23:16:48.721490  298586 retry.go:31] will retry after 307.067348ms: ssh: handshake failed: EOF
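
The three `ssh: handshake failed: EOF` warnings above are expected this early: a dozen workers dial the same sshd at once while the node is still settling, so a few dials get dropped and are retried after a short randomized delay (182ms, 364ms, 307ms here). A sketch of that retry-with-jitter pattern; `dial` is a hypothetical stand-in for opening an SSH session:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// dial is a hypothetical stand-in for establishing an SSH connection.
func dial() error { return errors.New("ssh: handshake failed: EOF") }

// withRetry retries fn a few times, sleeping a randomized delay between
// attempts so concurrent clients do not hammer sshd in lockstep.
func withRetry(attempts int, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := time.Duration(100+rand.Intn(300)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	if err := withRetry(3, dial); err != nil {
		fmt.Println("giving up:", err)
	}
}
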
	I1209 23:16:48.802728  298586 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1209 23:16:48.802807  298586 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1209 23:16:48.926330  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1209 23:16:48.951487  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 23:16:48.954461  298586 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1209 23:16:48.954489  298586 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1209 23:16:49.065311  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 23:16:49.085821  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 23:16:49.100369  298586 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1209 23:16:49.100394  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1209 23:16:49.103123  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 23:16:49.120519  298586 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 23:16:49.120544  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1209 23:16:49.129871  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:16:49.133255  298586 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1209 23:16:49.133280  298586 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1209 23:16:49.137225  298586 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1209 23:16:49.137249  298586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1209 23:16:49.138797  298586 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1209 23:16:49.138823  298586 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1209 23:16:49.275860  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1209 23:16:49.282274  298586 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1209 23:16:49.282299  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1209 23:16:49.286019  298586 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1209 23:16:49.286045  298586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1209 23:16:49.289769  298586 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1209 23:16:49.289795  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1209 23:16:49.323398  298586 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 23:16:49.323424  298586 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 23:16:49.334773  298586 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1209 23:16:49.334799  298586 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1209 23:16:49.378843  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 23:16:49.401024  298586 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1209 23:16:49.401051  298586 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1209 23:16:49.474402  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1209 23:16:49.478757  298586 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1209 23:16:49.478785  298586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1209 23:16:49.480708  298586 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:16:49.480734  298586 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 23:16:49.500730  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1209 23:16:49.545561  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 23:16:49.549094  298586 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1209 23:16:49.549118  298586 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1209 23:16:49.621694  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:16:49.625084  298586 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1209 23:16:49.625111  298586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1209 23:16:49.728718  298586 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1209 23:16:49.728745  298586 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1209 23:16:49.792850  298586 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1209 23:16:49.792878  298586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1209 23:16:49.877574  298586 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 23:16:49.877599  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1209 23:16:49.945027  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 23:16:49.950768  298586 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1209 23:16:49.950795  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1209 23:16:50.001709  298586 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1209 23:16:50.001741  298586 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1209 23:16:50.134350  298586 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1209 23:16:50.134376  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1209 23:16:50.249470  298586 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1209 23:16:50.249493  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1209 23:16:50.331566  298586 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 23:16:50.331595  298586 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1209 23:16:50.397821  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 23:16:50.544717  298586 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.109201235s)
	I1209 23:16:50.544750  298586 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
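
The two-second pipeline that just completed rewrites the CoreDNS Corefile in place: fetch the `coredns` ConfigMap, insert a `hosts` stanza mapping `host.minikube.internal` to the gateway IP 192.168.49.1 ahead of the `forward . /etc/resolv.conf` directive, then `kubectl replace` the ConfigMap. A dependency-free sketch of the string edit itself (the real code shells out to kubectl and sed, as shown above):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts block before CoreDNS's forward directive,
// so in-cluster lookups of host.minikube.internal resolve to the gateway.
func injectHostRecord(corefile, hostIP string) string {
	block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	lines := strings.SplitAfter(corefile, "\n")
	var b strings.Builder
	for _, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "forward . /etc/resolv.conf") {
			b.WriteString(block) // hosts must precede forward to win the lookup
		}
		b.WriteString(l)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
}
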
	I1209 23:16:50.545766  298586 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.10999125s)
	I1209 23:16:50.546511  298586 node_ready.go:35] waiting up to 6m0s for node "addons-006125" to be "Ready" ...
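
From here `node_ready.go` polls the node's Ready condition (the recurring `"Ready":"False"` lines below) until the kubelet posts Ready=True, which takes about 15.5s in this run. A sketch of that check using kubectl's jsonpath output; the condition query is standard Kubernetes, the helper name and poll interval are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// nodeReady asks the API server for the node's Ready condition status.
func nodeReady(node string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "node", node, "-o",
		`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	for start := time.Now(); time.Since(start) < 6*time.Minute; time.Sleep(3 * time.Second) {
		ready, err := nodeReady("addons-006125")
		if err == nil && ready {
			fmt.Println("node is Ready after", time.Since(start))
			return
		}
		fmt.Println(`node has status "Ready":"False"`)
	}
}
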
	I1209 23:16:52.032878  298586 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-006125" context rescaled to 1 replicas
	I1209 23:16:52.722785  298586 node_ready.go:53] node "addons-006125" has status "Ready":"False"
	I1209 23:16:52.906228  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.979810589s)
	I1209 23:16:52.906335  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.954826633s)
	I1209 23:16:55.052186  298586 node_ready.go:53] node "addons-006125" has status "Ready":"False"
	I1209 23:16:55.366469  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.263311676s)
	I1209 23:16:55.366541  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.236648785s)
	I1209 23:16:55.366613  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.090729695s)
	I1209 23:16:55.366635  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.987770043s)
	I1209 23:16:55.366831  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.89240078s)
	I1209 23:16:55.366844  298586 addons.go:475] Verifying addon registry=true in "addons-006125"
	I1209 23:16:55.367014  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.28058186s)
	I1209 23:16:55.367149  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.866388668s)
	I1209 23:16:55.367525  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.302178437s)
	I1209 23:16:55.367585  298586 addons.go:475] Verifying addon ingress=true in "addons-006125"
	I1209 23:16:55.367667  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.422612904s)
	W1209 23:16:55.367709  298586 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1209 23:16:55.367734  298586 retry.go:31] will retry after 139.542213ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
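
This failure is the classic CRD-ordering race, not a broken manifest: the same `kubectl apply` both creates the VolumeSnapshot CRDs and a `VolumeSnapshotClass` custom resource, and the API server's discovery has not registered the new kind by the time the CR is submitted, hence "ensure CRDs are installed first". minikube simply retries (with `--force`, at 23:16:55.507 below), which succeeds once the CRDs are established. An alternative, race-free ordering is to wait for the CRDs' Established condition between two separate applies; a sketch, with the file paths taken from this log:

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	// 1. Create the CRDs on their own.
	if err := run("apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"); err != nil {
		panic(err)
	}
	// 2. Block until the API server can serve the new kind.
	if err := run("wait", "--for=condition=Established",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io", "--timeout=60s"); err != nil {
		panic(err)
	}
	// 3. Only now apply resources of that kind.
	if err := run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
		panic(err)
	}
}
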
	I1209 23:16:55.367536  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.821946713s)
	I1209 23:16:55.367595  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.745870501s)
	I1209 23:16:55.367966  298586 addons.go:475] Verifying addon metrics-server=true in "addons-006125"
	I1209 23:16:55.372469  298586 out.go:177] * Verifying registry addon...
	I1209 23:16:55.372652  298586 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-006125 service yakd-dashboard -n yakd-dashboard
	
	I1209 23:16:55.372721  298586 out.go:177] * Verifying ingress addon...
	I1209 23:16:55.375408  298586 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1209 23:16:55.375941  298586 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1209 23:16:55.396092  298586 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1209 23:16:55.396174  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:16:55.401547  298586 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1209 23:16:55.401572  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
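
The `kapi.go:75/86/96` triad above is minikube's pod-readiness watcher: list pods by label selector, then poll until they leave Pending and report Ready (`current state: Pending: [<nil>]` means the pod object exists but carries no failure reason yet). A sketch of the selector poll via kubectl; the helper names are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podPhases returns the phase of every pod matching the label selector.
func podPhases(ns, selector string) ([]string, error) {
	out, err := exec.Command("kubectl", "-n", ns, "get", "pods",
		"-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func contains(ss []string, s string) bool {
	for _, v := range ss {
		if v == s {
			return true
		}
	}
	return false
}

func main() {
	selector := "app.kubernetes.io/name=ingress-nginx"
	for {
		phases, err := podPhases("ingress-nginx", selector)
		if err == nil && len(phases) > 0 && !contains(phases, "Pending") {
			fmt.Println("pods past Pending:", phases)
			return
		}
		fmt.Printf("waiting for pod %q, current state: %v\n", selector, phases)
		time.Sleep(500 * time.Millisecond)
	}
}
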
	W1209 23:16:55.403881  298586 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
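
The `default-storageclass` warning above is an optimistic-concurrency conflict rather than a hard failure: two writers (the rancher local-path addon and the default-storageclass addon) update StorageClass annotations at the same moment, and the losing update carries a stale resourceVersion. client-go ships the standard cure, `retry.RetryOnConflict`, which refetches the object and reapplies the mutation; a sketch assuming client-go is vendored and using the on-node kubeconfig path from this run:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Refetch-and-retry loop: each attempt reads a fresh object, so the
	// stale-resourceVersion conflict seen in the log cannot persist.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("local-path marked non-default")
}
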
	I1209 23:16:55.507737  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 23:16:55.801003  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.403131404s)
	I1209 23:16:55.801101  298586 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-006125"
	I1209 23:16:55.803999  298586 out.go:177] * Verifying csi-hostpath-driver addon...
	I1209 23:16:55.806832  298586 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1209 23:16:55.829781  298586 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1209 23:16:55.829862  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:16:55.885344  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:16:55.886431  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:16:56.311937  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:16:56.413144  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:16:56.414286  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:16:56.811467  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:16:56.879336  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:16:56.879822  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:16:57.311344  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:16:57.411983  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:16:57.412562  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:16:57.550233  298586 node_ready.go:53] node "addons-006125" has status "Ready":"False"
	I1209 23:16:57.811480  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:16:57.879342  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:16:57.880330  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:16:58.311734  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:16:58.332372  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.824536578s)
	I1209 23:16:58.411749  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:16:58.412473  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:16:58.811578  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:16:58.879799  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:16:58.880979  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:16:58.968069  298586 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1209 23:16:58.968176  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:58.987651  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:59.093927  298586 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1209 23:16:59.113132  298586 addons.go:234] Setting addon gcp-auth=true in "addons-006125"
	I1209 23:16:59.113187  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:59.113686  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:59.131538  298586 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1209 23:16:59.131597  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:59.148822  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:59.253208  298586 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 23:16:59.255221  298586 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1209 23:16:59.257080  298586 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1209 23:16:59.257100  298586 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1209 23:16:59.275815  298586 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1209 23:16:59.275843  298586 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1209 23:16:59.294908  298586 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 23:16:59.294934  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1209 23:16:59.311376  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:16:59.317949  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 23:16:59.380905  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:16:59.384654  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:16:59.550921  298586 node_ready.go:53] node "addons-006125" has status "Ready":"False"
	I1209 23:16:59.829569  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:16:59.851541  298586 addons.go:475] Verifying addon gcp-auth=true in "addons-006125"
	I1209 23:16:59.855305  298586 out.go:177] * Verifying gcp-auth addon...
	I1209 23:16:59.859556  298586 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1209 23:16:59.866178  298586 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1209 23:16:59.866265  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:16:59.964330  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:16:59.965287  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:00.344222  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:00.382147  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:00.396814  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:00.406737  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:00.814718  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:00.863674  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:00.880548  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:00.880810  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:01.315893  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:01.364110  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:01.379938  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:01.380718  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:01.552997  298586 node_ready.go:53] node "addons-006125" has status "Ready":"False"
	I1209 23:17:01.811465  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:01.864202  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:01.880708  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:01.881741  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:02.312176  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:02.364000  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:02.379394  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:02.380126  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:02.811717  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:02.863744  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:02.880470  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:02.881685  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:03.310921  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:03.365059  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:03.379795  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:03.380838  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:03.812865  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:03.864582  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:03.879005  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:03.880040  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:04.050291  298586 node_ready.go:53] node "addons-006125" has status "Ready":"False"
	I1209 23:17:04.312028  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:04.364045  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:04.380408  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:04.381459  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:04.812985  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:04.864261  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:04.881121  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:04.881185  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:05.311411  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:05.364631  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:05.380016  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:05.380964  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:05.811952  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:05.863967  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:05.879445  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:05.880274  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:06.089423  298586 node_ready.go:49] node "addons-006125" has status "Ready":"True"
	I1209 23:17:06.089509  298586 node_ready.go:38] duration metric: took 15.542971108s for node "addons-006125" to be "Ready" ...
	I1209 23:17:06.107753  298586 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:17:06.183008  298586 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ps5kv" in "kube-system" namespace to be "Ready" ...
	I1209 23:17:06.422810  298586 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1209 23:17:06.422895  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:06.424640  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:06.425328  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:06.425871  298586 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1209 23:17:06.425910  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:06.846248  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:06.871846  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:06.888433  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:06.888929  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:07.317097  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:07.415103  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:07.415888  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:07.421525  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:07.816282  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:07.863494  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:07.882177  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:07.884576  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:08.217281  298586 pod_ready.go:103] pod "coredns-7c65d6cfc9-ps5kv" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:08.314192  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:08.365161  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:08.386503  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:08.386848  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:08.690766  298586 pod_ready.go:93] pod "coredns-7c65d6cfc9-ps5kv" in "kube-system" namespace has status "Ready":"True"
	I1209 23:17:08.690795  298586 pod_ready.go:82] duration metric: took 2.507745046s for pod "coredns-7c65d6cfc9-ps5kv" in "kube-system" namespace to be "Ready" ...
	I1209 23:17:08.690824  298586 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-006125" in "kube-system" namespace to be "Ready" ...
	I1209 23:17:08.698898  298586 pod_ready.go:93] pod "etcd-addons-006125" in "kube-system" namespace has status "Ready":"True"
	I1209 23:17:08.698927  298586 pod_ready.go:82] duration metric: took 8.093412ms for pod "etcd-addons-006125" in "kube-system" namespace to be "Ready" ...
	I1209 23:17:08.698943  298586 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-006125" in "kube-system" namespace to be "Ready" ...
	I1209 23:17:08.708154  298586 pod_ready.go:93] pod "kube-apiserver-addons-006125" in "kube-system" namespace has status "Ready":"True"
	I1209 23:17:08.708182  298586 pod_ready.go:82] duration metric: took 9.228161ms for pod "kube-apiserver-addons-006125" in "kube-system" namespace to be "Ready" ...
	I1209 23:17:08.708194  298586 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-006125" in "kube-system" namespace to be "Ready" ...
	I1209 23:17:08.718690  298586 pod_ready.go:93] pod "kube-controller-manager-addons-006125" in "kube-system" namespace has status "Ready":"True"
	I1209 23:17:08.718716  298586 pod_ready.go:82] duration metric: took 10.514388ms for pod "kube-controller-manager-addons-006125" in "kube-system" namespace to be "Ready" ...
	I1209 23:17:08.718734  298586 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sp7fm" in "kube-system" namespace to be "Ready" ...
	I1209 23:17:08.726434  298586 pod_ready.go:93] pod "kube-proxy-sp7fm" in "kube-system" namespace has status "Ready":"True"
	I1209 23:17:08.726460  298586 pod_ready.go:82] duration metric: took 7.717899ms for pod "kube-proxy-sp7fm" in "kube-system" namespace to be "Ready" ...
	I1209 23:17:08.726473  298586 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-006125" in "kube-system" namespace to be "Ready" ...
	I1209 23:17:08.813152  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:08.863966  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:08.882444  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:08.883358  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:09.087914  298586 pod_ready.go:93] pod "kube-scheduler-addons-006125" in "kube-system" namespace has status "Ready":"True"
	I1209 23:17:09.087942  298586 pod_ready.go:82] duration metric: took 361.459747ms for pod "kube-scheduler-addons-006125" in "kube-system" namespace to be "Ready" ...
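Each pod_ready.go success above (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) corresponds to the pod's PodReady condition reporting True. A minimal hedged sketch of that per-pod check follows; it is an approximation of the pattern, not the harness's actual helper, and the polling cadence is an assumption. The pod name and the 6m0s budget are taken from the surrounding log.

// Hypothetical per-pod readiness check in the spirit of pod_ready.go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const ns, name = "kube-system", "metrics-server-84c5f94fbc-mh6kg"
	deadline := time.Now().Add(6 * time.Minute) // matches the logged budget
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Printf("pod %q in %q namespace has status \"Ready\":\"True\"\n", name, ns)
			return
		}
		// Transient Get errors are retried until the deadline.
		fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
		time.Sleep(2 * time.Second)
	}
}
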
	I1209 23:17:09.087956  298586 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace to be "Ready" ...
	I1209 23:17:09.311927  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:09.363885  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:09.380475  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:09.381644  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:09.812577  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:09.863379  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:09.880565  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:09.881356  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:10.312404  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:10.363689  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:10.380923  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:10.381417  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:10.813486  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:10.864194  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:10.881471  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:10.882503  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:11.096221  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:11.313045  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:11.366381  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:11.416954  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:11.417983  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:11.812597  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:11.863436  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:11.879681  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:11.880786  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:12.312391  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:12.363456  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:12.379078  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:12.382221  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:12.812570  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:12.863842  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:12.883692  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:12.887178  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:13.096489  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:13.314029  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:13.363610  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:13.380114  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:13.380692  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:13.815670  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:13.912422  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:13.913069  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:13.913927  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:14.312856  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:14.364026  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:14.380020  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:14.380896  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:14.812267  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:14.863605  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:14.880198  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:14.880432  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:15.097361  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:15.311825  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:15.363263  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:15.380040  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:15.380705  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:15.811425  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:15.863770  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:15.880133  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:15.880554  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:16.312809  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:16.364595  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:16.383580  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:16.384234  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:16.812265  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:16.864444  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:16.880799  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:16.883888  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:17.312490  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:17.364298  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:17.381777  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:17.382517  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:17.622143  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:17.812224  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:17.864363  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:17.882495  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:17.883905  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:18.316017  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:18.365160  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:18.381494  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:18.385643  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:18.814386  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:18.863732  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:18.880350  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:18.881458  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:19.313741  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:19.363431  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:19.381452  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:19.381829  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:19.811600  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:19.864273  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:19.885564  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:19.888042  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:20.102498  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:20.312480  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:20.364634  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:20.381135  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:20.382824  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:20.811756  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:20.864560  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:20.882332  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:20.883774  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:21.316134  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:21.365044  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:21.383392  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:21.384884  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:21.811771  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:21.863656  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:21.881659  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:21.882674  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:22.311464  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:22.363462  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:22.379244  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:22.382218  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:22.595857  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:22.813552  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:22.864902  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:22.880203  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:22.881233  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:23.314448  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:23.368036  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:23.380228  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:23.382417  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:23.812722  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:23.864067  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:23.881749  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:23.883028  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:24.312369  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:24.412934  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:24.412977  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:24.413424  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:24.616158  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:24.812309  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:24.863503  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:24.879095  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:24.881150  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:25.312852  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:25.363195  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:25.380255  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:25.382627  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:25.812070  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:25.865060  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:25.880790  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:25.882270  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:26.312937  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:26.366020  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:26.380498  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:26.380722  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:26.812102  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:26.863832  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:26.882111  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:26.883622  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:27.097843  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:27.312168  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:27.363068  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:27.379735  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:27.380944  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:27.813102  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:27.863102  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:27.880816  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:27.881001  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:28.312052  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:28.364383  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:28.381949  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:28.382532  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:28.812215  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:28.863774  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:28.879961  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:28.880967  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:29.312147  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:29.412221  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:29.412774  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:29.414507  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:29.622361  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:29.811586  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:29.863730  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:29.882122  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:29.883979  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:30.312368  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:30.363776  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:30.380546  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:30.383477  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:30.812716  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:30.865024  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:30.884388  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:30.885960  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:31.330289  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:31.372828  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:31.412338  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:31.412598  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:31.811247  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:31.864354  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:31.883488  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:31.884586  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:32.096086  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:32.312512  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:32.363868  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:32.380076  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:32.381233  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:32.812760  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:32.863332  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:32.880556  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:32.880799  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:33.311929  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:33.363312  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:33.379648  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:33.381617  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:33.812506  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:33.863515  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:33.881197  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:33.881510  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:34.096697  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:34.313777  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:34.365097  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:34.385236  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:34.387053  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:34.814112  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:34.863631  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:34.880430  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:34.881638  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:35.316682  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:35.365726  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:35.382704  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:35.384097  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:35.817793  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:35.867734  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:35.885453  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:35.889607  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:36.101866  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:36.312490  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:36.364673  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:36.383494  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:36.387063  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:36.812350  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:36.885013  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:36.886838  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:36.888539  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:37.314077  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:37.368964  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:37.381191  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:37.384587  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:37.827127  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:37.863657  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:37.881327  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:37.882560  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:38.105162  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:38.313863  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:38.370014  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:38.404708  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:38.406854  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:38.813624  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:38.863853  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:38.914098  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:38.915409  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:39.314685  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:39.415001  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:39.415648  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:39.416477  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:39.813158  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:39.863687  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:39.883872  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:39.887524  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:40.312624  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:40.364111  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:40.382837  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:40.384931  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:40.595468  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:40.812527  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:40.866301  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:40.884568  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:40.885946  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:41.312834  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:41.363626  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:41.382471  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:41.383307  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:41.812076  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:41.864109  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:41.882143  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:41.882615  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:42.314945  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:42.366375  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:42.382313  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:42.384550  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:42.597304  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:42.813166  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:42.863940  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:42.881947  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:42.883289  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:43.311720  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:43.364605  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:43.380989  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:43.384648  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:43.813378  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:43.863679  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:43.879666  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:43.880331  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:44.312420  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:44.369140  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:44.384343  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:44.385674  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:44.813218  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:44.864558  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:44.881474  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:44.882979  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:45.095781  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:45.312855  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:45.365728  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:45.389349  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:45.398566  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:45.812103  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:45.864942  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:45.884618  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:45.885647  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:46.314968  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:46.363580  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:46.385505  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:46.387027  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:46.812988  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:46.863930  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:46.881953  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:46.883536  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:47.099987  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:47.316578  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:47.363213  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:47.382082  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:47.383700  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:47.811816  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:47.863688  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:47.880426  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:47.881395  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:48.316002  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:48.364599  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:48.379342  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:48.381218  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:48.812564  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:48.863563  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:48.886276  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:48.887622  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:49.101211  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:49.315998  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:49.364411  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:49.390088  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:49.390979  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:49.813388  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:49.865590  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:49.883510  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:49.885222  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:50.312725  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:50.363277  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:50.382203  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:50.383629  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:50.812560  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:50.863096  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:50.892716  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:50.901021  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:51.315653  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:51.363392  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:51.380507  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:51.383951  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:51.595563  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:51.814118  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:51.863655  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:51.880775  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:51.882430  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:52.317074  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:52.365951  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:52.392122  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:52.396532  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:52.817551  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:52.863331  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:52.880833  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:52.882287  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:53.317095  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:53.363770  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:53.379589  298586 kapi.go:107] duration metric: took 58.00417968s to wait for kubernetes.io/minikube-addons=registry ...
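The kapi.go:107 line above closes out the registry wait with a duration metric (about 58s of the Pending polls logged throughout this section), after kapi.go:86 earlier reported how many pods matched the label selector. The sketch below shows what such a label-selector wait loop could look like; it is an approximation of the logged pattern, not the actual kapi.go code, and waitForSelector, the 500ms cadence, and the 6m0s timeout are invented for illustration.

// Hypothetical label-selector wait loop in the style of kapi.go:86/96/107.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForSelector(client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		pods, err := client.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0 // no matches yet also counts as not ready
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				ready = false
			}
		}
		if ready {
			fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
			return nil
		}
		time.Sleep(500 * time.Millisecond) // poll cadence is an assumption
	}
	return fmt.Errorf("timed out waiting for %s", selector)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Selector taken from the log above; namespace and timeout are assumptions.
	if err := waitForSelector(client, "kube-system",
		"kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		panic(err)
	}
}
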
	I1209 23:17:53.380974  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:53.812327  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:53.863561  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:53.881084  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:54.095405  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:54.312060  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:54.363372  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:54.380300  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:54.811888  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:54.863785  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:54.880450  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:55.311982  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:55.362853  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:55.380431  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:55.812121  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:55.869676  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:55.884092  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:56.097706  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:56.313318  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:56.364439  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:56.380881  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:56.812636  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:56.864119  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:56.881048  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:57.313259  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:57.363617  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:57.383005  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:57.813028  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:57.863767  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:57.882984  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:58.125547  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:58.312084  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:58.363907  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:58.380377  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:58.812568  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:58.863019  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:58.880246  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:59.319529  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:59.418505  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:59.419494  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:59.813734  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:59.863177  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:59.880680  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:00.143495  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:00.333246  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:00.382434  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:00.398573  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:00.811784  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:00.863601  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:00.880978  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:01.313533  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:01.415309  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:01.416420  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:01.813224  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:01.871805  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:01.882373  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:02.313427  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:02.363852  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:02.380819  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:02.594258  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:02.814829  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:02.864251  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:02.883551  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:03.315529  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:03.364101  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:03.380829  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:03.812594  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:03.864699  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:03.881418  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:04.312769  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:04.362875  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:04.396542  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:04.595674  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:04.812294  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:04.863920  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:04.880131  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:05.317265  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:05.364410  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:05.381936  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:05.813148  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:05.865226  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:05.881709  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:06.324240  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:06.363825  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:06.380047  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:06.597218  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:06.825112  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:06.910817  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:06.912067  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:07.313921  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:07.412728  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:07.414009  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:07.812513  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:07.863713  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:07.881008  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:08.311422  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:08.363342  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:08.380815  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:08.812263  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:08.863134  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:08.880327  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:09.094263  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:09.312277  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:09.363459  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:09.380489  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:09.814632  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:09.864071  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:09.880625  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:10.312592  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:10.364963  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:10.380558  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:10.811206  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:10.865729  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:10.882767  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:11.095565  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:11.312244  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:11.411518  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:11.412935  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:11.812388  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:11.863690  298586 kapi.go:107] duration metric: took 1m12.004144578s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1209 23:18:11.866366  298586 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-006125 cluster.
	I1209 23:18:11.868769  298586 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1209 23:18:11.871093  298586 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1209 23:18:11.880263  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:12.311680  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:12.381782  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:12.812033  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:12.880563  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:13.097204  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:13.326123  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:13.416180  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:13.812139  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:13.881172  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:14.315394  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:14.419570  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:14.813220  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:14.881690  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:15.097848  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:15.312842  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:15.380758  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:15.819321  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:15.881221  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:16.313528  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:16.381432  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:16.813899  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:16.881148  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:17.312563  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:17.414240  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:17.595553  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:17.813730  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:17.881637  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:18.312993  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:18.382196  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:18.813842  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:18.880788  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:19.313231  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:19.382665  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:19.811742  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:19.914082  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:20.111624  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:20.313222  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:20.380078  298586 kapi.go:107] duration metric: took 1m25.004131993s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1209 23:18:20.811761  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:21.312057  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:21.811938  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:22.311858  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:22.594589  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:22.812256  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:23.312509  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:23.812860  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:24.312416  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:24.594841  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:24.812100  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:25.312839  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:25.812268  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:26.312025  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:26.814334  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:27.094841  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:27.311887  298586 kapi.go:107] duration metric: took 1m31.505053736s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1209 23:18:27.314827  298586 out.go:177] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner, inspektor-gadget, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1209 23:18:27.317437  298586 addons.go:510] duration metric: took 1m39.315557983s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin nvidia-device-plugin storage-provisioner inspektor-gadget ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1209 23:18:29.094982  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:31.095350  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:33.595150  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:36.095070  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:38.594342  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:40.594691  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:42.595539  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:45.111480  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:47.094867  298586 pod_ready.go:93] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"True"
	I1209 23:18:47.094894  298586 pod_ready.go:82] duration metric: took 1m38.006928905s for pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace to be "Ready" ...
	I1209 23:18:47.094908  298586 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-nqsf9" in "kube-system" namespace to be "Ready" ...
	I1209 23:18:47.100404  298586 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-nqsf9" in "kube-system" namespace has status "Ready":"True"
	I1209 23:18:47.100435  298586 pod_ready.go:82] duration metric: took 5.518194ms for pod "nvidia-device-plugin-daemonset-nqsf9" in "kube-system" namespace to be "Ready" ...
	I1209 23:18:47.100458  298586 pod_ready.go:39] duration metric: took 1m40.992639961s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:18:47.100473  298586 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:18:47.101048  298586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 23:18:47.101141  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 23:18:47.164828  298586 cri.go:89] found id: "28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4"
	I1209 23:18:47.164859  298586 cri.go:89] found id: ""
	I1209 23:18:47.164867  298586 logs.go:282] 1 containers: [28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4]
	I1209 23:18:47.164946  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:47.169793  298586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 23:18:47.169872  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 23:18:47.225891  298586 cri.go:89] found id: "bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3"
	I1209 23:18:47.225915  298586 cri.go:89] found id: ""
	I1209 23:18:47.225924  298586 logs.go:282] 1 containers: [bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3]
	I1209 23:18:47.226012  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:47.229838  298586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 23:18:47.229956  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 23:18:47.283100  298586 cri.go:89] found id: "60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a"
	I1209 23:18:47.283163  298586 cri.go:89] found id: ""
	I1209 23:18:47.283173  298586 logs.go:282] 1 containers: [60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a]
	I1209 23:18:47.283235  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:47.287525  298586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 23:18:47.287650  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 23:18:47.328787  298586 cri.go:89] found id: "9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55"
	I1209 23:18:47.328809  298586 cri.go:89] found id: ""
	I1209 23:18:47.328817  298586 logs.go:282] 1 containers: [9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55]
	I1209 23:18:47.328878  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:47.332874  298586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 23:18:47.332949  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 23:18:47.378532  298586 cri.go:89] found id: "07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143"
	I1209 23:18:47.378605  298586 cri.go:89] found id: ""
	I1209 23:18:47.378618  298586 logs.go:282] 1 containers: [07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143]
	I1209 23:18:47.378818  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:47.382909  298586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 23:18:47.383028  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 23:18:47.427889  298586 cri.go:89] found id: "ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7"
	I1209 23:18:47.427914  298586 cri.go:89] found id: ""
	I1209 23:18:47.427923  298586 logs.go:282] 1 containers: [ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7]
	I1209 23:18:47.427990  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:47.432484  298586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 23:18:47.432565  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 23:18:47.475022  298586 cri.go:89] found id: "46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c"
	I1209 23:18:47.475046  298586 cri.go:89] found id: ""
	I1209 23:18:47.475061  298586 logs.go:282] 1 containers: [46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c]
	I1209 23:18:47.475175  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:47.481147  298586 logs.go:123] Gathering logs for dmesg ...
	I1209 23:18:47.481223  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 23:18:47.499467  298586 logs.go:123] Gathering logs for etcd [bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3] ...
	I1209 23:18:47.499496  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3"
	I1209 23:18:47.550999  298586 logs.go:123] Gathering logs for coredns [60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a] ...
	I1209 23:18:47.551032  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a"
	I1209 23:18:47.627242  298586 logs.go:123] Gathering logs for kube-proxy [07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143] ...
	I1209 23:18:47.627277  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143"
	I1209 23:18:47.680809  298586 logs.go:123] Gathering logs for kube-controller-manager [ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7] ...
	I1209 23:18:47.680840  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7"
	I1209 23:18:47.752097  298586 logs.go:123] Gathering logs for kindnet [46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c] ...
	I1209 23:18:47.752135  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c"
	I1209 23:18:47.790826  298586 logs.go:123] Gathering logs for kubelet ...
	I1209 23:18:47.790858  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 23:18:47.868101  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:05 addons-006125 kubelet[1514]: W1209 23:17:05.967476    1514 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006125' and this object
	W1209 23:18:47.868380  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:05 addons-006125 kubelet[1514]: E1209 23:17:05.967533    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	W1209 23:18:47.868574  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:05 addons-006125 kubelet[1514]: W1209 23:17:05.967591    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006125' and this object
	W1209 23:18:47.868803  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:05 addons-006125 kubelet[1514]: E1209 23:17:05.967605    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	W1209 23:18:47.870079  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:06 addons-006125 kubelet[1514]: W1209 23:17:06.035381    1514 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-006125' and this object
	W1209 23:18:47.870312  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:06 addons-006125 kubelet[1514]: E1209 23:17:06.035439    1514 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	I1209 23:18:47.909575  298586 logs.go:123] Gathering logs for kube-apiserver [28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4] ...
	I1209 23:18:47.909612  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4"
	I1209 23:18:47.987180  298586 logs.go:123] Gathering logs for kube-scheduler [9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55] ...
	I1209 23:18:47.987300  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55"
	I1209 23:18:48.057088  298586 logs.go:123] Gathering logs for CRI-O ...
	I1209 23:18:48.057140  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 23:18:48.157233  298586 logs.go:123] Gathering logs for container status ...
	I1209 23:18:48.157274  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 23:18:48.212076  298586 logs.go:123] Gathering logs for describe nodes ...
	I1209 23:18:48.212108  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 23:18:48.422529  298586 out.go:358] Setting ErrFile to fd 2...
	I1209 23:18:48.422557  298586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 23:18:48.422609  298586 out.go:270] X Problems detected in kubelet:
	W1209 23:18:48.422628  298586 out.go:270]   Dec 09 23:17:05 addons-006125 kubelet[1514]: E1209 23:17:05.967533    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	W1209 23:18:48.422635  298586 out.go:270]   Dec 09 23:17:05 addons-006125 kubelet[1514]: W1209 23:17:05.967591    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006125' and this object
	W1209 23:18:48.422646  298586 out.go:270]   Dec 09 23:17:05 addons-006125 kubelet[1514]: E1209 23:17:05.967605    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	W1209 23:18:48.422653  298586 out.go:270]   Dec 09 23:17:06 addons-006125 kubelet[1514]: W1209 23:17:06.035381    1514 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-006125' and this object
	W1209 23:18:48.422664  298586 out.go:270]   Dec 09 23:17:06 addons-006125 kubelet[1514]: E1209 23:17:06.035439    1514 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	I1209 23:18:48.422670  298586 out.go:358] Setting ErrFile to fd 2...
	I1209 23:18:48.422679  298586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:18:58.423939  298586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:18:58.438628  298586 api_server.go:72] duration metric: took 2m10.437339017s to wait for apiserver process to appear ...
	I1209 23:18:58.438654  298586 api_server.go:88] waiting for apiserver healthz status ...
	I1209 23:18:58.438688  298586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 23:18:58.438758  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 23:18:58.481201  298586 cri.go:89] found id: "28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4"
	I1209 23:18:58.481226  298586 cri.go:89] found id: ""
	I1209 23:18:58.481235  298586 logs.go:282] 1 containers: [28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4]
	I1209 23:18:58.481292  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:58.484978  298586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 23:18:58.485057  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 23:18:58.522434  298586 cri.go:89] found id: "bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3"
	I1209 23:18:58.522458  298586 cri.go:89] found id: ""
	I1209 23:18:58.522467  298586 logs.go:282] 1 containers: [bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3]
	I1209 23:18:58.522524  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:58.526129  298586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 23:18:58.526203  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 23:18:58.564758  298586 cri.go:89] found id: "60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a"
	I1209 23:18:58.564781  298586 cri.go:89] found id: ""
	I1209 23:18:58.564789  298586 logs.go:282] 1 containers: [60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a]
	I1209 23:18:58.564846  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:58.568394  298586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 23:18:58.568467  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 23:18:58.607651  298586 cri.go:89] found id: "9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55"
	I1209 23:18:58.607679  298586 cri.go:89] found id: ""
	I1209 23:18:58.607693  298586 logs.go:282] 1 containers: [9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55]
	I1209 23:18:58.607754  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:58.611400  298586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 23:18:58.611479  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 23:18:58.651381  298586 cri.go:89] found id: "07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143"
	I1209 23:18:58.651404  298586 cri.go:89] found id: ""
	I1209 23:18:58.651412  298586 logs.go:282] 1 containers: [07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143]
	I1209 23:18:58.651472  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:58.655072  298586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 23:18:58.655174  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 23:18:58.694835  298586 cri.go:89] found id: "ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7"
	I1209 23:18:58.694859  298586 cri.go:89] found id: ""
	I1209 23:18:58.694867  298586 logs.go:282] 1 containers: [ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7]
	I1209 23:18:58.694925  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:58.698523  298586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 23:18:58.698598  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 23:18:58.738842  298586 cri.go:89] found id: "46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c"
	I1209 23:18:58.738867  298586 cri.go:89] found id: ""
	I1209 23:18:58.738875  298586 logs.go:282] 1 containers: [46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c]
	I1209 23:18:58.738931  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:58.742664  298586 logs.go:123] Gathering logs for etcd [bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3] ...
	I1209 23:18:58.742699  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3"
	I1209 23:18:58.795871  298586 logs.go:123] Gathering logs for coredns [60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a] ...
	I1209 23:18:58.795904  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a"
	I1209 23:18:58.860291  298586 logs.go:123] Gathering logs for kube-controller-manager [ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7] ...
	I1209 23:18:58.860325  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7"
	I1209 23:18:58.948926  298586 logs.go:123] Gathering logs for kindnet [46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c] ...
	I1209 23:18:58.948965  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c"
	I1209 23:18:58.988605  298586 logs.go:123] Gathering logs for kubelet ...
	I1209 23:18:58.988635  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 23:18:59.056397  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:05 addons-006125 kubelet[1514]: W1209 23:17:05.967476    1514 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006125' and this object
	W1209 23:18:59.056669  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:05 addons-006125 kubelet[1514]: E1209 23:17:05.967533    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	W1209 23:18:59.056860  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:05 addons-006125 kubelet[1514]: W1209 23:17:05.967591    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006125' and this object
	W1209 23:18:59.057091  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:05 addons-006125 kubelet[1514]: E1209 23:17:05.967605    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	W1209 23:18:59.058395  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:06 addons-006125 kubelet[1514]: W1209 23:17:06.035381    1514 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-006125' and this object
	W1209 23:18:59.058611  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:06 addons-006125 kubelet[1514]: E1209 23:17:06.035439    1514 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	I1209 23:18:59.098989  298586 logs.go:123] Gathering logs for dmesg ...
	I1209 23:18:59.099032  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 23:18:59.115930  298586 logs.go:123] Gathering logs for describe nodes ...
	I1209 23:18:59.115962  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 23:18:59.275457  298586 logs.go:123] Gathering logs for kube-apiserver [28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4] ...
	I1209 23:18:59.275489  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4"
	I1209 23:18:59.344235  298586 logs.go:123] Gathering logs for CRI-O ...
	I1209 23:18:59.344277  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 23:18:59.444879  298586 logs.go:123] Gathering logs for kube-scheduler [9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55] ...
	I1209 23:18:59.444919  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55"
	I1209 23:18:59.492387  298586 logs.go:123] Gathering logs for kube-proxy [07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143] ...
	I1209 23:18:59.492424  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143"
	I1209 23:18:59.532088  298586 logs.go:123] Gathering logs for container status ...
	I1209 23:18:59.532121  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 23:18:59.581440  298586 out.go:358] Setting ErrFile to fd 2...
	I1209 23:18:59.581466  298586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 23:18:59.581525  298586 out.go:270] X Problems detected in kubelet:
	W1209 23:18:59.581536  298586 out.go:270]   Dec 09 23:17:05 addons-006125 kubelet[1514]: E1209 23:17:05.967533    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	W1209 23:18:59.581542  298586 out.go:270]   Dec 09 23:17:05 addons-006125 kubelet[1514]: W1209 23:17:05.967591    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006125' and this object
	W1209 23:18:59.581549  298586 out.go:270]   Dec 09 23:17:05 addons-006125 kubelet[1514]: E1209 23:17:05.967605    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	W1209 23:18:59.581557  298586 out.go:270]   Dec 09 23:17:06 addons-006125 kubelet[1514]: W1209 23:17:06.035381    1514 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-006125' and this object
	W1209 23:18:59.581564  298586 out.go:270]   Dec 09 23:17:06 addons-006125 kubelet[1514]: E1209 23:17:06.035439    1514 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	I1209 23:18:59.581577  298586 out.go:358] Setting ErrFile to fd 2...
	I1209 23:18:59.581582  298586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:19:09.582637  298586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1209 23:19:09.591661  298586 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1209 23:19:09.593500  298586 api_server.go:141] control plane version: v1.31.2
	I1209 23:19:09.593529  298586 api_server.go:131] duration metric: took 11.154866294s to wait for apiserver health ...
	I1209 23:19:09.593538  298586 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 23:19:09.593561  298586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 23:19:09.593628  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 23:19:09.650246  298586 cri.go:89] found id: "28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4"
	I1209 23:19:09.650266  298586 cri.go:89] found id: ""
	I1209 23:19:09.650275  298586 logs.go:282] 1 containers: [28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4]
	I1209 23:19:09.650336  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:19:09.654062  298586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 23:19:09.654193  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 23:19:09.697026  298586 cri.go:89] found id: "bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3"
	I1209 23:19:09.697049  298586 cri.go:89] found id: ""
	I1209 23:19:09.697057  298586 logs.go:282] 1 containers: [bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3]
	I1209 23:19:09.697123  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:19:09.700847  298586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 23:19:09.700924  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 23:19:09.745769  298586 cri.go:89] found id: "60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a"
	I1209 23:19:09.745792  298586 cri.go:89] found id: ""
	I1209 23:19:09.745801  298586 logs.go:282] 1 containers: [60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a]
	I1209 23:19:09.745870  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:19:09.749699  298586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 23:19:09.749776  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 23:19:09.788613  298586 cri.go:89] found id: "9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55"
	I1209 23:19:09.788640  298586 cri.go:89] found id: ""
	I1209 23:19:09.788649  298586 logs.go:282] 1 containers: [9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55]
	I1209 23:19:09.788714  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:19:09.792628  298586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 23:19:09.792714  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 23:19:09.838079  298586 cri.go:89] found id: "07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143"
	I1209 23:19:09.838100  298586 cri.go:89] found id: ""
	I1209 23:19:09.838109  298586 logs.go:282] 1 containers: [07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143]
	I1209 23:19:09.838171  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:19:09.842101  298586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 23:19:09.842230  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 23:19:09.885915  298586 cri.go:89] found id: "ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7"
	I1209 23:19:09.885936  298586 cri.go:89] found id: ""
	I1209 23:19:09.885945  298586 logs.go:282] 1 containers: [ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7]
	I1209 23:19:09.886005  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:19:09.889892  298586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 23:19:09.889963  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 23:19:09.930139  298586 cri.go:89] found id: "46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c"
	I1209 23:19:09.930159  298586 cri.go:89] found id: ""
	I1209 23:19:09.930167  298586 logs.go:282] 1 containers: [46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c]
	I1209 23:19:09.930224  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:19:09.933971  298586 logs.go:123] Gathering logs for kubelet ...
	I1209 23:19:09.933995  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 23:19:10.006231  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:05 addons-006125 kubelet[1514]: W1209 23:17:05.967476    1514 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006125' and this object
	W1209 23:19:10.006481  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:05 addons-006125 kubelet[1514]: E1209 23:17:05.967533    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	W1209 23:19:10.006665  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:05 addons-006125 kubelet[1514]: W1209 23:17:05.967591    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006125' and this object
	W1209 23:19:10.006890  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:05 addons-006125 kubelet[1514]: E1209 23:17:05.967605    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	W1209 23:19:10.008211  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:06 addons-006125 kubelet[1514]: W1209 23:17:06.035381    1514 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-006125' and this object
	W1209 23:19:10.008421  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:06 addons-006125 kubelet[1514]: E1209 23:17:06.035439    1514 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	I1209 23:19:10.048853  298586 logs.go:123] Gathering logs for describe nodes ...
	I1209 23:19:10.048893  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 23:19:10.202155  298586 logs.go:123] Gathering logs for kube-proxy [07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143] ...
	I1209 23:19:10.202186  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143"
	I1209 23:19:10.246024  298586 logs.go:123] Gathering logs for kube-controller-manager [ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7] ...
	I1209 23:19:10.246071  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7"
	I1209 23:19:10.315867  298586 logs.go:123] Gathering logs for container status ...
	I1209 23:19:10.315906  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 23:19:10.374661  298586 logs.go:123] Gathering logs for kindnet [46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c] ...
	I1209 23:19:10.374699  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c"
	I1209 23:19:10.419539  298586 logs.go:123] Gathering logs for CRI-O ...
	I1209 23:19:10.419576  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 23:19:10.521679  298586 logs.go:123] Gathering logs for dmesg ...
	I1209 23:19:10.521728  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 23:19:10.542385  298586 logs.go:123] Gathering logs for kube-apiserver [28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4] ...
	I1209 23:19:10.542419  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4"
	I1209 23:19:10.622348  298586 logs.go:123] Gathering logs for etcd [bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3] ...
	I1209 23:19:10.622386  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3"
	I1209 23:19:10.671460  298586 logs.go:123] Gathering logs for coredns [60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a] ...
	I1209 23:19:10.671498  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a"
	I1209 23:19:10.736896  298586 logs.go:123] Gathering logs for kube-scheduler [9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55] ...
	I1209 23:19:10.736936  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55"
	I1209 23:19:10.811487  298586 out.go:358] Setting ErrFile to fd 2...
	I1209 23:19:10.811520  298586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 23:19:10.811602  298586 out.go:270] X Problems detected in kubelet:
	W1209 23:19:10.811617  298586 out.go:270]   Dec 09 23:17:05 addons-006125 kubelet[1514]: E1209 23:17:05.967533    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	W1209 23:19:10.811743  298586 out.go:270]   Dec 09 23:17:05 addons-006125 kubelet[1514]: W1209 23:17:05.967591    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006125' and this object
	W1209 23:19:10.811760  298586 out.go:270]   Dec 09 23:17:05 addons-006125 kubelet[1514]: E1209 23:17:05.967605    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	W1209 23:19:10.811767  298586 out.go:270]   Dec 09 23:17:06 addons-006125 kubelet[1514]: W1209 23:17:06.035381    1514 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-006125' and this object
	W1209 23:19:10.811796  298586 out.go:270]   Dec 09 23:17:06 addons-006125 kubelet[1514]: E1209 23:17:06.035439    1514 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	I1209 23:19:10.811803  298586 out.go:358] Setting ErrFile to fd 2...
	I1209 23:19:10.811815  298586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:19:20.823707  298586 system_pods.go:59] 18 kube-system pods found
	I1209 23:19:20.823748  298586 system_pods.go:61] "coredns-7c65d6cfc9-ps5kv" [ac846172-1271-42aa-9357-9fe66f96d82e] Running
	I1209 23:19:20.823755  298586 system_pods.go:61] "csi-hostpath-attacher-0" [70f07398-e2e6-43c0-8bac-4a6293846c83] Running
	I1209 23:19:20.823760  298586 system_pods.go:61] "csi-hostpath-resizer-0" [34a1d7a1-e66c-4126-8d08-31dd378e5ea6] Running
	I1209 23:19:20.823764  298586 system_pods.go:61] "csi-hostpathplugin-2lbjj" [7f67705b-ffe1-4e1b-a0f2-8bab862d22e9] Running
	I1209 23:19:20.823768  298586 system_pods.go:61] "etcd-addons-006125" [a56e9231-50e7-4e8d-a3c4-93ed5da068e1] Running
	I1209 23:19:20.823772  298586 system_pods.go:61] "kindnet-pshzw" [39fdd361-24a9-4d74-b04d-46b9b70eca6b] Running
	I1209 23:19:20.823799  298586 system_pods.go:61] "kube-apiserver-addons-006125" [df184bbd-bc28-424b-ba7a-bafa22eb9cfc] Running
	I1209 23:19:20.823811  298586 system_pods.go:61] "kube-controller-manager-addons-006125" [1bdae08d-3326-47dc-b837-2887ecec58fa] Running
	I1209 23:19:20.823815  298586 system_pods.go:61] "kube-ingress-dns-minikube" [bfdbb59b-5556-4a7d-87bc-bdfcb11a73cf] Running
	I1209 23:19:20.823821  298586 system_pods.go:61] "kube-proxy-sp7fm" [5e14145c-69bb-4925-9fc5-5222465c4f5c] Running
	I1209 23:19:20.823827  298586 system_pods.go:61] "kube-scheduler-addons-006125" [02d6d85b-621d-44f8-9ab2-7937ef0626bb] Running
	I1209 23:19:20.823832  298586 system_pods.go:61] "metrics-server-84c5f94fbc-mh6kg" [028d5ed7-2cbe-4a41-9585-89a1da10129a] Running
	I1209 23:19:20.823839  298586 system_pods.go:61] "nvidia-device-plugin-daemonset-nqsf9" [ae3a9e66-1569-459a-8a4c-25e166bd28a9] Running
	I1209 23:19:20.823843  298586 system_pods.go:61] "registry-5cc95cd69-s95j5" [0e371bb5-f973-4496-b0af-810240c01f88] Running
	I1209 23:19:20.823846  298586 system_pods.go:61] "registry-proxy-m54xt" [c65f40cc-4e12-46bd-a8c7-12d30baa522c] Running
	I1209 23:19:20.823851  298586 system_pods.go:61] "snapshot-controller-56fcc65765-8jbrz" [97fd1ee6-328a-437b-9179-923246db9b8c] Running
	I1209 23:19:20.823875  298586 system_pods.go:61] "snapshot-controller-56fcc65765-vkc6z" [157f0d6a-9131-4c4a-a3a7-af4d21263013] Running
	I1209 23:19:20.823887  298586 system_pods.go:61] "storage-provisioner" [169f26d0-1747-4bb8-90ce-17759ea05d6b] Running
	I1209 23:19:20.823894  298586 system_pods.go:74] duration metric: took 11.23034998s to wait for pod list to return data ...
	I1209 23:19:20.823907  298586 default_sa.go:34] waiting for default service account to be created ...
	I1209 23:19:20.826774  298586 default_sa.go:45] found service account: "default"
	I1209 23:19:20.826802  298586 default_sa.go:55] duration metric: took 2.88837ms for default service account to be created ...
	I1209 23:19:20.826812  298586 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 23:19:20.837425  298586 system_pods.go:86] 18 kube-system pods found
	I1209 23:19:20.837461  298586 system_pods.go:89] "coredns-7c65d6cfc9-ps5kv" [ac846172-1271-42aa-9357-9fe66f96d82e] Running
	I1209 23:19:20.837470  298586 system_pods.go:89] "csi-hostpath-attacher-0" [70f07398-e2e6-43c0-8bac-4a6293846c83] Running
	I1209 23:19:20.837483  298586 system_pods.go:89] "csi-hostpath-resizer-0" [34a1d7a1-e66c-4126-8d08-31dd378e5ea6] Running
	I1209 23:19:20.837508  298586 system_pods.go:89] "csi-hostpathplugin-2lbjj" [7f67705b-ffe1-4e1b-a0f2-8bab862d22e9] Running
	I1209 23:19:20.837522  298586 system_pods.go:89] "etcd-addons-006125" [a56e9231-50e7-4e8d-a3c4-93ed5da068e1] Running
	I1209 23:19:20.837528  298586 system_pods.go:89] "kindnet-pshzw" [39fdd361-24a9-4d74-b04d-46b9b70eca6b] Running
	I1209 23:19:20.837533  298586 system_pods.go:89] "kube-apiserver-addons-006125" [df184bbd-bc28-424b-ba7a-bafa22eb9cfc] Running
	I1209 23:19:20.837541  298586 system_pods.go:89] "kube-controller-manager-addons-006125" [1bdae08d-3326-47dc-b837-2887ecec58fa] Running
	I1209 23:19:20.837549  298586 system_pods.go:89] "kube-ingress-dns-minikube" [bfdbb59b-5556-4a7d-87bc-bdfcb11a73cf] Running
	I1209 23:19:20.837554  298586 system_pods.go:89] "kube-proxy-sp7fm" [5e14145c-69bb-4925-9fc5-5222465c4f5c] Running
	I1209 23:19:20.837559  298586 system_pods.go:89] "kube-scheduler-addons-006125" [02d6d85b-621d-44f8-9ab2-7937ef0626bb] Running
	I1209 23:19:20.837582  298586 system_pods.go:89] "metrics-server-84c5f94fbc-mh6kg" [028d5ed7-2cbe-4a41-9585-89a1da10129a] Running
	I1209 23:19:20.837595  298586 system_pods.go:89] "nvidia-device-plugin-daemonset-nqsf9" [ae3a9e66-1569-459a-8a4c-25e166bd28a9] Running
	I1209 23:19:20.837613  298586 system_pods.go:89] "registry-5cc95cd69-s95j5" [0e371bb5-f973-4496-b0af-810240c01f88] Running
	I1209 23:19:20.837623  298586 system_pods.go:89] "registry-proxy-m54xt" [c65f40cc-4e12-46bd-a8c7-12d30baa522c] Running
	I1209 23:19:20.837627  298586 system_pods.go:89] "snapshot-controller-56fcc65765-8jbrz" [97fd1ee6-328a-437b-9179-923246db9b8c] Running
	I1209 23:19:20.837631  298586 system_pods.go:89] "snapshot-controller-56fcc65765-vkc6z" [157f0d6a-9131-4c4a-a3a7-af4d21263013] Running
	I1209 23:19:20.837639  298586 system_pods.go:89] "storage-provisioner" [169f26d0-1747-4bb8-90ce-17759ea05d6b] Running
	I1209 23:19:20.837646  298586 system_pods.go:126] duration metric: took 10.827039ms to wait for k8s-apps to be running ...
	I1209 23:19:20.837660  298586 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 23:19:20.837734  298586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 23:19:20.850687  298586 system_svc.go:56] duration metric: took 13.018367ms WaitForService to wait for kubelet
	I1209 23:19:20.850720  298586 kubeadm.go:582] duration metric: took 2m32.849436375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:19:20.850740  298586 node_conditions.go:102] verifying NodePressure condition ...
	I1209 23:19:20.855500  298586 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1209 23:19:20.855547  298586 node_conditions.go:123] node cpu capacity is 2
	I1209 23:19:20.855570  298586 node_conditions.go:105] duration metric: took 4.823482ms to run NodePressure ...
	I1209 23:19:20.855584  298586 start.go:241] waiting for startup goroutines ...
	I1209 23:19:20.855592  298586 start.go:246] waiting for cluster config update ...
	I1209 23:19:20.855614  298586 start.go:255] writing updated cluster config ...
	I1209 23:19:20.855923  298586 ssh_runner.go:195] Run: rm -f paused
	I1209 23:19:21.235289  298586 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 23:19:21.237691  298586 out.go:177] * Done! kubectl is now configured to use "addons-006125" cluster and "default" namespace by default
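	
	Note on the kubelet warnings summarized above: the "no relationship found between node 'addons-006125' and this object" messages come from the Kubernetes Node authorizer, which only lets a kubelet read a ConfigMap once a pod scheduled to that node references it. During startup the kubelet's reflectors race that pod assignment, so these warnings are transient. One way to probe the same authorization decision (a sketch, assuming kubectl access to this test cluster and that impersonation is permitted; the Node authorizer's answer depends on which pods currently run on the node):
	
	    # Ask the API server whether the node identity may list configmaps in kube-system.
	    kubectl --context addons-006125 auth can-i list configmaps \
	      --namespace kube-system \
	      --as system:node:addons-006125 --as-group system:nodes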
	
	
	==> CRI-O <==
	Dec 09 23:21:43 addons-006125 crio[961]: time="2024-12-09 23:21:43.702594983Z" level=info msg="Removed pod sandbox: 0d1040d47baf1a77ee54e2a2565046150a73cee2f043a5e259718645f0e08a30" id=4f384f70-4af3-48ab-aac3-4b010fdebae4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 09 23:23:50 addons-006125 crio[961]: time="2024-12-09 23:23:50.035281278Z" level=info msg="Running pod sandbox: default/hello-world-app-55bf9c44b4-crwqk/POD" id=87064cab-83d3-4131-9e51-dee57895e488 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 23:23:50 addons-006125 crio[961]: time="2024-12-09 23:23:50.035346632Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 09 23:23:50 addons-006125 crio[961]: time="2024-12-09 23:23:50.076929750Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-crwqk Namespace:default ID:25fedac5b784d9a202f20e1daf2d1c7a5d5fdf1fcbf27d1dd8a8fee17d0aaa7f UID:c0ce9916-090d-4b7a-b41c-4e5b251d60a0 NetNS:/var/run/netns/e63ff563-687d-436d-871d-91e3ab5f42e4 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 09 23:23:50 addons-006125 crio[961]: time="2024-12-09 23:23:50.076972237Z" level=info msg="Adding pod default_hello-world-app-55bf9c44b4-crwqk to CNI network \"kindnet\" (type=ptp)"
	Dec 09 23:23:50 addons-006125 crio[961]: time="2024-12-09 23:23:50.097647675Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-crwqk Namespace:default ID:25fedac5b784d9a202f20e1daf2d1c7a5d5fdf1fcbf27d1dd8a8fee17d0aaa7f UID:c0ce9916-090d-4b7a-b41c-4e5b251d60a0 NetNS:/var/run/netns/e63ff563-687d-436d-871d-91e3ab5f42e4 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 09 23:23:50 addons-006125 crio[961]: time="2024-12-09 23:23:50.097824498Z" level=info msg="Checking pod default_hello-world-app-55bf9c44b4-crwqk for CNI network kindnet (type=ptp)"
	Dec 09 23:23:50 addons-006125 crio[961]: time="2024-12-09 23:23:50.102246637Z" level=info msg="Ran pod sandbox 25fedac5b784d9a202f20e1daf2d1c7a5d5fdf1fcbf27d1dd8a8fee17d0aaa7f with infra container: default/hello-world-app-55bf9c44b4-crwqk/POD" id=87064cab-83d3-4131-9e51-dee57895e488 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 23:23:50 addons-006125 crio[961]: time="2024-12-09 23:23:50.104146630Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=d27c82ce-1a72-4fb3-9fb3-e82e3957762b name=/runtime.v1.ImageService/ImageStatus
	Dec 09 23:23:50 addons-006125 crio[961]: time="2024-12-09 23:23:50.104386723Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=d27c82ce-1a72-4fb3-9fb3-e82e3957762b name=/runtime.v1.ImageService/ImageStatus
	Dec 09 23:23:50 addons-006125 crio[961]: time="2024-12-09 23:23:50.107265083Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=c6cdb3cb-c13d-4ed7-a436-0bf1399f4884 name=/runtime.v1.ImageService/PullImage
	Dec 09 23:23:50 addons-006125 crio[961]: time="2024-12-09 23:23:50.110708044Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 09 23:23:50 addons-006125 crio[961]: time="2024-12-09 23:23:50.397984106Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 09 23:23:51 addons-006125 crio[961]: time="2024-12-09 23:23:51.218058605Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=c6cdb3cb-c13d-4ed7-a436-0bf1399f4884 name=/runtime.v1.ImageService/PullImage
	Dec 09 23:23:51 addons-006125 crio[961]: time="2024-12-09 23:23:51.219014595Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=698b6599-b360-4c00-8936-6fa2fe1d8980 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 23:23:51 addons-006125 crio[961]: time="2024-12-09 23:23:51.219753812Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=698b6599-b360-4c00-8936-6fa2fe1d8980 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 23:23:51 addons-006125 crio[961]: time="2024-12-09 23:23:51.223461793Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=2e882698-b2b5-463e-b6f6-66582c3d4abb name=/runtime.v1.ImageService/ImageStatus
	Dec 09 23:23:51 addons-006125 crio[961]: time="2024-12-09 23:23:51.226023308Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2e882698-b2b5-463e-b6f6-66582c3d4abb name=/runtime.v1.ImageService/ImageStatus
	Dec 09 23:23:51 addons-006125 crio[961]: time="2024-12-09 23:23:51.227055705Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-crwqk/hello-world-app" id=ada934ba-4cc2-473d-97f4-0bade42420c6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 23:23:51 addons-006125 crio[961]: time="2024-12-09 23:23:51.227269263Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 09 23:23:51 addons-006125 crio[961]: time="2024-12-09 23:23:51.251875832Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3b193c1cfb9feea48322cdb0de375245d3ce928cadb22625c62e3761d5437f23/merged/etc/passwd: no such file or directory"
	Dec 09 23:23:51 addons-006125 crio[961]: time="2024-12-09 23:23:51.251924957Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3b193c1cfb9feea48322cdb0de375245d3ce928cadb22625c62e3761d5437f23/merged/etc/group: no such file or directory"
	Dec 09 23:23:51 addons-006125 crio[961]: time="2024-12-09 23:23:51.314925815Z" level=info msg="Created container 864e68f55a0c6a902cb3f2af3d72c522753b73a7afc04cc5281a2c6e56893538: default/hello-world-app-55bf9c44b4-crwqk/hello-world-app" id=ada934ba-4cc2-473d-97f4-0bade42420c6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 23:23:51 addons-006125 crio[961]: time="2024-12-09 23:23:51.315877686Z" level=info msg="Starting container: 864e68f55a0c6a902cb3f2af3d72c522753b73a7afc04cc5281a2c6e56893538" id=2a4f9e5a-03e6-4d46-b52f-5713ba34f4d2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 23:23:51 addons-006125 crio[961]: time="2024-12-09 23:23:51.331494138Z" level=info msg="Started container" PID=9721 containerID=864e68f55a0c6a902cb3f2af3d72c522753b73a7afc04cc5281a2c6e56893538 description=default/hello-world-app-55bf9c44b4-crwqk/hello-world-app id=2a4f9e5a-03e6-4d46-b52f-5713ba34f4d2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=25fedac5b784d9a202f20e1daf2d1c7a5d5fdf1fcbf27d1dd8a8fee17d0aaa7f
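	
	Note: the CRI-O entries above show the full sandbox-to-container flow for hello-world-app: pod sandbox creation, CNI attachment to the kindnet network, pull of docker.io/kicbase/echo-server:1.0, then container create and start. The "Failed to open /etc/passwd" warnings are expected for minimal images that ship no /etc/passwd. To replay the same inspection by hand (a sketch, assuming the profile name from this run and that crictl is on the node's PATH):
	
	    # Run crictl inside the minikube "node" container over ssh.
	    minikube -p addons-006125 ssh -- sudo crictl images | grep echo-server
	    minikube -p addons-006125 ssh -- sudo crictl ps --name hello-world-app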
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	864e68f55a0c6       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   25fedac5b784d       hello-world-app-55bf9c44b4-crwqk
	83be2b3d47e63       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                              2 minutes ago            Running             nginx                     0                   6ce328d60f74b       nginx
	c613f71753421       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          4 minutes ago            Running             busybox                   0                   f251b28a646eb       busybox
	61562f3c7ba8c       registry.k8s.io/ingress-nginx/controller@sha256:787a5408fa511266888b2e765f9666bee67d9bf2518a6b7cfd4ab6cc01c22eee             5 minutes ago            Running             controller                0                   35b55c586f053       ingress-nginx-controller-5f85ff4588-7wrst
	a81f2583f2542       d54655ed3a8543a162b688a24bf969ee1a28d906b8ccb30188059247efdae234                                                             5 minutes ago            Exited              patch                     3                   ac9f416aea691       ingress-nginx-admission-patch-xmxnz
	8261124c91551       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0550b75a965592f1dde3fbeaa98f67a1e10c5a086bcd69a29054cc4edcb56771   5 minutes ago            Exited              create                    0                   425a6b62f351a       ingress-nginx-admission-create-ss8p4
	8191bfee51953       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        6 minutes ago            Running             metrics-server            0                   441138a11451a       metrics-server-84c5f94fbc-mh6kg
	d5d1271c82b80       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             6 minutes ago            Running             minikube-ingress-dns      0                   181af42509b63       kube-ingress-dns-minikube
	60f210579139f       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             6 minutes ago            Running             coredns                   0                   8f608a61a5b01       coredns-7c65d6cfc9-ps5kv
	83cc359be952d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             6 minutes ago            Running             storage-provisioner       0                   fb9951ed37ad7       storage-provisioner
	46d2df09c6c2d       docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e                           6 minutes ago            Running             kindnet-cni               0                   acba07dda4adc       kindnet-pshzw
	07527f58e4332       021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba                                                             7 minutes ago            Running             kube-proxy                0                   577c7b2136793       kube-proxy-sp7fm
	28972e4f2344f       f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270                                                             7 minutes ago            Running             kube-apiserver            0                   e36e3239239f7       kube-apiserver-addons-006125
	9591912eda249       d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a                                                             7 minutes ago            Running             kube-scheduler            0                   f9945e404cb30       kube-scheduler-addons-006125
	ebbf87552a8cf       9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba                                                             7 minutes ago            Running             kube-controller-manager   0                   e96afb4daa5ed       kube-controller-manager-addons-006125
	bf6e7cfb9e6ee       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             7 minutes ago            Running             etcd                      0                   c79995c7d2ab0       etcd-addons-006125
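	
	Note on reading the table: ATTEMPT counts restarts of a container within its pod, so the ingress-nginx admission "patch" job needed three tries before succeeding, while "create" exited cleanly on the first; both are one-shot Jobs, so their Exited state is expected. Listing only terminated containers on the node (a sketch using crictl's state filter):
	
	    sudo crictl ps -a --state exited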
	
	
	==> coredns [60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a] <==
	[INFO] 10.244.0.4:43171 - 47233 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00174201s
	[INFO] 10.244.0.4:43171 - 55270 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000087246s
	[INFO] 10.244.0.4:43171 - 7465 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000037563s
	[INFO] 10.244.0.4:36911 - 47979 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000124407s
	[INFO] 10.244.0.4:36911 - 48201 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000052235s
	[INFO] 10.244.0.4:56399 - 64705 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000100891s
	[INFO] 10.244.0.4:56399 - 64277 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000046097s
	[INFO] 10.244.0.4:60271 - 38394 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000101762s
	[INFO] 10.244.0.4:60271 - 37958 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000048099s
	[INFO] 10.244.0.4:39360 - 2473 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001745251s
	[INFO] 10.244.0.4:39360 - 2284 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00133613s
	[INFO] 10.244.0.4:35230 - 4040 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000070335s
	[INFO] 10.244.0.4:35230 - 3620 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000036997s
	[INFO] 10.244.0.20:47144 - 20075 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000237032s
	[INFO] 10.244.0.20:38850 - 18722 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000122824s
	[INFO] 10.244.0.20:51184 - 19000 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000181467s
	[INFO] 10.244.0.20:38376 - 46733 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000089478s
	[INFO] 10.244.0.20:53006 - 65016 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00014058s
	[INFO] 10.244.0.20:43744 - 22939 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000110032s
	[INFO] 10.244.0.20:33587 - 31479 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004730426s
	[INFO] 10.244.0.20:39602 - 854 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004921953s
	[INFO] 10.244.0.20:38618 - 600 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000830738s
	[INFO] 10.244.0.20:56177 - 59987 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001784776s
	[INFO] 10.244.0.23:44187 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000213164s
	[INFO] 10.244.0.23:39397 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00013409s
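	
	Note: the burst of NXDOMAIN answers above is ordinary search-path expansion, not a resolution failure. With the default ndots:5 in pod resolv.conf, a name like registry.kube-system.svc.cluster.local is tried against each search domain (the pod namespace's svc domain, svc.cluster.local, cluster.local, and the node's resolver domain, here us-east-2.compute.internal) before the final NOERROR answer. To watch the same expansion from inside the cluster (a sketch; the dnsutils image is the one suggested by the upstream DNS-debugging docs, not something this report uses):
	
	    kubectl --context addons-006125 run dns-test --rm -it --restart=Never \
	      --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 -- \
	      nslookup registry.kube-system.svc.cluster.local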
	
	
	==> describe nodes <==
	Name:               addons-006125
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-006125
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=addons-006125
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T23_16_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-006125
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 23:16:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-006125
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 23:23:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 23:21:59 +0000   Mon, 09 Dec 2024 23:16:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 23:21:59 +0000   Mon, 09 Dec 2024 23:16:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 23:21:59 +0000   Mon, 09 Dec 2024 23:16:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 23:21:59 +0000   Mon, 09 Dec 2024 23:17:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-006125
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f86b1de6b3e42aca6bb81d86a17348e
	  System UUID:                27619cfe-0879-4c6d-8dce-4580b148df40
	  Boot ID:                    50e9d5fe-ba16-4119-8482-ef38225f12b8
	  Kernel Version:             5.15.0-1072-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  default                     hello-world-app-55bf9c44b4-crwqk             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-7wrst    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m56s
	  kube-system                 coredns-7c65d6cfc9-ps5kv                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     7m1s
	  kube-system                 etcd-addons-006125                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7m8s
	  kube-system                 kindnet-pshzw                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m2s
	  kube-system                 kube-apiserver-addons-006125                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 kube-controller-manager-addons-006125        200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m59s
	  kube-system                 kube-proxy-sp7fm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 kube-scheduler-addons-006125                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 metrics-server-84c5f94fbc-mh6kg              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m58s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             510Mi (6%)   220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 6m56s  kube-proxy       
	  Normal   Starting                 7m8s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m8s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  7m8s   kubelet          Node addons-006125 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m8s   kubelet          Node addons-006125 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m8s   kubelet          Node addons-006125 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m4s   node-controller  Node addons-006125 event: Registered Node addons-006125 in Controller
	  Normal   NodeReady                6m46s  kubelet          Node addons-006125 status is now: NodeReady
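	
	Note: the "Allocated resources" figures are just the column sums from the pod table above. CPU requests: 100m + 100m + 100m + 100m + 250m + 200m + 100m + 100m = 1050m, i.e. 52% of the node's 2-CPU (2000m) allocatable. Memory requests: 90Mi + 70Mi + 100Mi + 50Mi + 200Mi = 510Mi, about 6% of the 8022300Ki (~7834Mi) allocatable. The lone 100m CPU limit belongs to kindnet, the only pod that sets one, and the 220Mi memory limit is coredns (170Mi) plus kindnet (50Mi).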
	
	
	==> dmesg <==
	[Dec 9 21:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014264] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.469192] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028174] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.034435] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.016734] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.679647] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.654401] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 9 22:21] hrtimer: interrupt took 5553077 ns
	[Dec 9 22:45] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3] <==
	{"level":"info","ts":"2024-12-09T23:16:36.724006Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-09T23:16:36.724532Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-09T23:16:36.725947Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-09T23:16:36.731127Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-09T23:16:36.731174Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-09T23:16:36.731254Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T23:16:36.731332Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T23:16:36.731363Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T23:16:36.731999Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-09T23:16:36.732856Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-09T23:16:36.807366Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-12-09T23:16:48.796525Z","caller":"traceutil/trace.go:171","msg":"trace[299301096] transaction","detail":"{read_only:false; response_revision:311; number_of_response:1; }","duration":"192.07546ms","start":"2024-12-09T23:16:48.604286Z","end":"2024-12-09T23:16:48.796361Z","steps":["trace[299301096] 'process raft request'  (duration: 124.270688ms)","trace[299301096] 'compare'  (duration: 35.029267ms)","trace[299301096] 'attach lease to kv pair' {req_type:put; key:/registry/minions/addons-006125; req_size:5728; } (duration: 32.631594ms)"],"step_count":3}
	{"level":"info","ts":"2024-12-09T23:16:48.882475Z","caller":"traceutil/trace.go:171","msg":"trace[815781161] linearizableReadLoop","detail":"{readStateIndex:322; appliedIndex:320; }","duration":"225.373031ms","start":"2024-12-09T23:16:48.657091Z","end":"2024-12-09T23:16:48.882464Z","steps":["trace[815781161] 'read index received'  (duration: 71.71146ms)","trace[815781161] 'applied index is now lower than readState.Index'  (duration: 153.660964ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T23:16:48.882624Z","caller":"traceutil/trace.go:171","msg":"trace[2077674660] transaction","detail":"{read_only:false; response_revision:312; number_of_response:1; }","duration":"251.955611ms","start":"2024-12-09T23:16:48.630658Z","end":"2024-12-09T23:16:48.882613Z","steps":["trace[2077674660] 'process raft request'  (duration: 251.71738ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:16:48.882738Z","caller":"traceutil/trace.go:171","msg":"trace[273352616] transaction","detail":"{read_only:false; response_revision:313; number_of_response:1; }","duration":"225.586104ms","start":"2024-12-09T23:16:48.657145Z","end":"2024-12-09T23:16:48.882731Z","steps":["trace[273352616] 'process raft request'  (duration: 225.297493ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:16:48.882861Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.755748ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-12-09T23:16:48.882902Z","caller":"traceutil/trace.go:171","msg":"trace[188832223] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:313; }","duration":"225.80753ms","start":"2024-12-09T23:16:48.657086Z","end":"2024-12-09T23:16:48.882894Z","steps":["trace[188832223] 'agreement among raft nodes before linearized reading'  (duration: 225.715845ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:16:48.943514Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.352967ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2024-12-09T23:16:48.943577Z","caller":"traceutil/trace.go:171","msg":"trace[2047370642] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:315; }","duration":"147.425355ms","start":"2024-12-09T23:16:48.796140Z","end":"2024-12-09T23:16:48.943565Z","steps":["trace[2047370642] 'agreement among raft nodes before linearized reading'  (duration: 147.312886ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:16:48.943846Z","caller":"traceutil/trace.go:171","msg":"trace[549763088] transaction","detail":"{read_only:false; response_revision:314; number_of_response:1; }","duration":"147.654207ms","start":"2024-12-09T23:16:48.796182Z","end":"2024-12-09T23:16:48.943837Z","steps":["trace[549763088] 'process raft request'  (duration: 134.700603ms)","trace[549763088] 'compare'  (duration: 12.45608ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T23:16:48.943951Z","caller":"traceutil/trace.go:171","msg":"trace[2056228274] transaction","detail":"{read_only:false; response_revision:315; number_of_response:1; }","duration":"147.665505ms","start":"2024-12-09T23:16:48.796278Z","end":"2024-12-09T23:16:48.943944Z","steps":["trace[2056228274] 'process raft request'  (duration: 147.142102ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:16:51.951702Z","caller":"traceutil/trace.go:171","msg":"trace[1939232064] transaction","detail":"{read_only:false; response_revision:346; number_of_response:1; }","duration":"181.261738ms","start":"2024-12-09T23:16:51.770423Z","end":"2024-12-09T23:16:51.951685Z","steps":["trace[1939232064] 'process raft request'  (duration: 158.980378ms)","trace[1939232064] 'compare'  (duration: 21.888592ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T23:16:51.952073Z","caller":"traceutil/trace.go:171","msg":"trace[1404595988] transaction","detail":"{read_only:false; response_revision:347; number_of_response:1; }","duration":"165.556225ms","start":"2024-12-09T23:16:51.786509Z","end":"2024-12-09T23:16:51.952065Z","steps":["trace[1404595988] 'process raft request'  (duration: 164.88073ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:16:51.952459Z","caller":"traceutil/trace.go:171","msg":"trace[562913129] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"165.761799ms","start":"2024-12-09T23:16:51.786688Z","end":"2024-12-09T23:16:51.952450Z","steps":["trace[562913129] 'process raft request'  (duration: 164.951303ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:16:51.952610Z","caller":"traceutil/trace.go:171","msg":"trace[1796416080] transaction","detail":"{read_only:false; response_revision:349; number_of_response:1; }","duration":"165.529484ms","start":"2024-12-09T23:16:51.787075Z","end":"2024-12-09T23:16:51.952604Z","steps":["trace[1796416080] 'process raft request'  (duration: 164.946281ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:23:51 up  2:06,  0 users,  load average: 0.47, 1.60, 2.20
	Linux addons-006125 5.15.0-1072-aws #78~20.04.1-Ubuntu SMP Wed Oct 9 15:29:54 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c] <==
	I1209 23:21:45.461738       1 main.go:301] handling current node
	I1209 23:21:55.460869       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:21:55.460902       1 main.go:301] handling current node
	I1209 23:22:05.460423       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:22:05.460460       1 main.go:301] handling current node
	I1209 23:22:15.469510       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:22:15.469547       1 main.go:301] handling current node
	I1209 23:22:25.467057       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:22:25.467223       1 main.go:301] handling current node
	I1209 23:22:35.464636       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:22:35.464670       1 main.go:301] handling current node
	I1209 23:22:45.469625       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:22:45.469660       1 main.go:301] handling current node
	I1209 23:22:55.461153       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:22:55.461194       1 main.go:301] handling current node
	I1209 23:23:05.466956       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:23:05.467073       1 main.go:301] handling current node
	I1209 23:23:15.460637       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:23:15.460765       1 main.go:301] handling current node
	I1209 23:23:25.460598       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:23:25.460634       1 main.go:301] handling current node
	I1209 23:23:35.460432       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:23:35.460467       1 main.go:301] handling current node
	I1209 23:23:45.467728       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:23:45.467766       1 main.go:301] handling current node
	
	
	==> kube-apiserver [28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4] <==
	E1209 23:19:58.436141       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1209 23:20:07.594201       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.155.147"}
	E1209 23:20:10.646387       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1209 23:20:10.663522       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1209 23:20:10.700509       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1209 23:20:25.677190       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1209 23:20:54.968554       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1209 23:21:09.089549       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:21:09.089706       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 23:21:09.116632       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:21:09.116686       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 23:21:09.153939       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:21:09.154095       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 23:21:09.197657       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:21:09.197774       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 23:21:09.230308       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:21:09.230451       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1209 23:21:10.198444       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1209 23:21:10.230566       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1209 23:21:10.288616       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1209 23:21:22.848323       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1209 23:21:23.888376       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1209 23:21:28.515637       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1209 23:21:28.849350       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.48.252"}
	I1209 23:23:49.984903       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.244.252"}
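	
	Note: the "Unable to authenticate the request ... serviceaccounts \"local-path-provisioner-service-account\" not found" errors are the usual fallout of disabling an addon while a pod still presents the deleted ServiceAccount's token, and the "Terminating all watchers" lines follow the removal of the snapshot and gadget API groups a minute earlier. Confirming both are gone rather than broken (a sketch):
	
	    kubectl --context addons-006125 get serviceaccounts -n local-path-storage
	    kubectl --context addons-006125 api-resources --api-group=snapshot.storage.k8s.io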
	
	
	==> kube-controller-manager [ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7] <==
	W1209 23:22:03.799024       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:22:03.799071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:22:19.151890       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:22:19.151938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:22:23.417029       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:22:23.417079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:22:24.406848       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:22:24.406892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:22:47.866972       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:22:47.867021       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:23:03.101321       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:23:03.101411       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:23:08.811486       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:23:08.811529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:23:14.877127       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:23:14.877171       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:23:31.352438       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:23:31.352489       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:23:40.875172       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:23:40.875237       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:23:46.676924       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:23:46.676975       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1209 23:23:49.725530       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="39.431971ms"
	I1209 23:23:49.757012       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="31.359558ms"
	I1209 23:23:49.757177       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="35.627µs"
	
	
	==> kube-proxy [07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143] <==
	I1209 23:16:52.879951       1 server_linux.go:66] "Using iptables proxy"
	I1209 23:16:53.896500       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1209 23:16:53.981292       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 23:16:54.878461       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 23:16:54.878600       1 server_linux.go:169] "Using iptables Proxier"
	I1209 23:16:54.881840       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 23:16:54.882677       1 server.go:483] "Version info" version="v1.31.2"
	I1209 23:16:54.882759       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 23:16:54.885691       1 config.go:199] "Starting service config controller"
	I1209 23:16:54.885779       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 23:16:54.885811       1 config.go:105] "Starting endpoint slice config controller"
	I1209 23:16:54.885816       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 23:16:54.886677       1 config.go:328] "Starting node config controller"
	I1209 23:16:54.886731       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 23:16:54.995178       1 shared_informer.go:320] Caches are synced for service config
	I1209 23:16:55.004280       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1209 23:16:54.987245       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55] <==
	W1209 23:16:40.837541       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 23:16:40.838565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 23:16:40.837582       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 23:16:40.838635       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:16:40.837624       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1209 23:16:40.838713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 23:16:41.694127       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 23:16:41.694172       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 23:16:41.707528       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1209 23:16:41.707642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 23:16:41.773998       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1209 23:16:41.774060       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 23:16:41.784090       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 23:16:41.784134       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:16:41.801202       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1209 23:16:41.801247       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1209 23:16:41.859930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1209 23:16:41.859992       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:16:41.879327       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1209 23:16:41.879375       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:16:41.890899       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1209 23:16:41.891011       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1209 23:16:41.929521       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1209 23:16:41.929637       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1209 23:16:43.609133       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 09 23:22:03 addons-006125 kubelet[1514]: E1209 23:22:03.390061    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786523389810256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:22:03 addons-006125 kubelet[1514]: E1209 23:22:03.390100    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786523389810256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:22:13 addons-006125 kubelet[1514]: E1209 23:22:13.393150    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786533392872766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:22:13 addons-006125 kubelet[1514]: E1209 23:22:13.393189    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786533392872766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:22:23 addons-006125 kubelet[1514]: E1209 23:22:23.396841    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786543395693924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:22:23 addons-006125 kubelet[1514]: E1209 23:22:23.396886    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786543395693924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:22:33 addons-006125 kubelet[1514]: E1209 23:22:33.399481    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786553399189229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:22:33 addons-006125 kubelet[1514]: E1209 23:22:33.399523    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786553399189229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:22:43 addons-006125 kubelet[1514]: E1209 23:22:43.402504    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786563402219854,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:22:43 addons-006125 kubelet[1514]: E1209 23:22:43.402546    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786563402219854,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:22:53 addons-006125 kubelet[1514]: E1209 23:22:53.405717    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786573405465174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:22:53 addons-006125 kubelet[1514]: E1209 23:22:53.405755    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786573405465174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:23:03 addons-006125 kubelet[1514]: E1209 23:23:03.408478    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786583408235281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:23:03 addons-006125 kubelet[1514]: E1209 23:23:03.408511    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786583408235281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:23:13 addons-006125 kubelet[1514]: E1209 23:23:13.410766    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786593410500150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:23:13 addons-006125 kubelet[1514]: E1209 23:23:13.410805    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786593410500150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:23:20 addons-006125 kubelet[1514]: I1209 23:23:20.220847    1514 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 23:23:23 addons-006125 kubelet[1514]: E1209 23:23:23.413567    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786603413299122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:23:23 addons-006125 kubelet[1514]: E1209 23:23:23.414040    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786603413299122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:23:33 addons-006125 kubelet[1514]: E1209 23:23:33.417244    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786613416990773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:23:33 addons-006125 kubelet[1514]: E1209 23:23:33.417280    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786613416990773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:23:43 addons-006125 kubelet[1514]: E1209 23:23:43.420249    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786623419991989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:23:43 addons-006125 kubelet[1514]: E1209 23:23:43.420291    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786623419991989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:23:49 addons-006125 kubelet[1514]: I1209 23:23:49.731059    1514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=139.098215068 podStartE2EDuration="2m21.731040178s" podCreationTimestamp="2024-12-09 23:21:28 +0000 UTC" firstStartedPulling="2024-12-09 23:21:29.130955946 +0000 UTC m=+286.049738443" lastFinishedPulling="2024-12-09 23:21:31.763781056 +0000 UTC m=+288.682563553" observedRunningTime="2024-12-09 23:21:31.985790018 +0000 UTC m=+288.904572507" watchObservedRunningTime="2024-12-09 23:23:49.731040178 +0000 UTC m=+426.649822667"
	Dec 09 23:23:49 addons-006125 kubelet[1514]: I1209 23:23:49.835663    1514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smgdp\" (UniqueName: \"kubernetes.io/projected/c0ce9916-090d-4b7a-b41c-4e5b251d60a0-kube-api-access-smgdp\") pod \"hello-world-app-55bf9c44b4-crwqk\" (UID: \"c0ce9916-090d-4b7a-b41c-4e5b251d60a0\") " pod="default/hello-world-app-55bf9c44b4-crwqk"
	
	
	==> storage-provisioner [83cc359be952de1780a0d0711ba6424c0cc5987de64528fa84096cb7fbc2c1b0] <==
	I1209 23:17:06.986075       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 23:17:07.030066       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 23:17:07.030210       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 23:17:07.050343       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 23:17:07.051402       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-006125_33b22b2d-39d5-4d44-8f73-be6eefb60b1a!
	I1209 23:17:07.051550       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"40f924e7-c661-441d-bbbc-9188fb45d87d", APIVersion:"v1", ResourceVersion:"875", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-006125_33b22b2d-39d5-4d44-8f73-be6eefb60b1a became leader
	I1209 23:17:07.153203       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-006125_33b22b2d-39d5-4d44-8f73-be6eefb60b1a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-006125 -n addons-006125
helpers_test.go:261: (dbg) Run:  kubectl --context addons-006125 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-ss8p4 ingress-nginx-admission-patch-xmxnz
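The helper finds those leftover admission pods with a field selector over all namespaces. For reference, a minimal client-go sketch of the same query; the kubeconfig handling below is an illustrative assumption, not part of the test harness:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: a kubeconfig in the default location selects the right context.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Same filter the helper passes to kubectl: every pod whose phase is not Running.
		pods, err := clientset.CoreV1().Pods("").List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace, p.Name, p.Status.Phase)
		}
	}

Completed admission jobs leave their pods in phase Succeeded, so the two ingress-nginx-admission pods match this selector without indicating a failure by themselves.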
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-006125 describe pod ingress-nginx-admission-create-ss8p4 ingress-nginx-admission-patch-xmxnz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-006125 describe pod ingress-nginx-admission-create-ss8p4 ingress-nginx-admission-patch-xmxnz: exit status 1 (91.087878ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-ss8p4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xmxnz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-006125 describe pod ingress-nginx-admission-create-ss8p4 ingress-nginx-admission-patch-xmxnz: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-006125 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-006125 addons disable ingress-dns --alsologtostderr -v=1: (1.314554034s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-006125 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-006125 addons disable ingress --alsologtostderr -v=1: (7.983392887s)
--- FAIL: TestAddons/parallel/Ingress (154.12s)
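This test exercises an nginx ingress (the apiserver log above shows the ingresses.networking.k8s.io admission evaluator and the default/nginx clusterIP being allocated). A check of this kind typically probes the ingress with an explicit Host header so the intended rule matches. A minimal Go sketch of such a probe; the target URL and host name are placeholders, not values taken from this run:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// A client timeout bounds the probe the way the test's overall deadline does.
		client := &http.Client{Timeout: 10 * time.Second}
		req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
		if err != nil {
			panic(err)
		}
		// The Host header, not the URL, decides which ingress rule matches.
		req.Host = "placeholder.example"
		resp, err := client.Do(req)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status, len(body), "bytes")
	}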

                                                
                                    
TestAddons/parallel/MetricsServer (290.06s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 5.966035ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-mh6kg" [028d5ed7-2cbe-4a41-9585-89a1da10129a] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003928025s
addons_test.go:402: (dbg) Run:  kubectl --context addons-006125 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-006125 top pods -n kube-system: exit status 1 (106.769746ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ps5kv, age: 4m8.00740709s

                                                
                                                
** /stderr **
I1209 23:20:58.010836  297827 retry.go:31] will retry after 4.116933016s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-006125 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-006125 top pods -n kube-system: exit status 1 (90.561041ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ps5kv, age: 4m12.215590558s

                                                
                                                
** /stderr **
I1209 23:21:02.219220  297827 retry.go:31] will retry after 5.635076561s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-006125 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-006125 top pods -n kube-system: exit status 1 (110.597335ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ps5kv, age: 4m17.962856156s

                                                
                                                
** /stderr **
I1209 23:21:07.965678  297827 retry.go:31] will retry after 8.315010994s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-006125 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-006125 top pods -n kube-system: exit status 1 (124.495637ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ps5kv, age: 4m26.401809226s

                                                
                                                
** /stderr **
I1209 23:21:16.405783  297827 retry.go:31] will retry after 14.18937731s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-006125 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-006125 top pods -n kube-system: exit status 1 (134.15933ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ps5kv, age: 4m40.726724905s

                                                
                                                
** /stderr **
I1209 23:21:30.730050  297827 retry.go:31] will retry after 10.993288414s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-006125 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-006125 top pods -n kube-system: exit status 1 (92.119672ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ps5kv, age: 4m51.812671372s

                                                
                                                
** /stderr **
I1209 23:21:41.815801  297827 retry.go:31] will retry after 11.827493576s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-006125 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-006125 top pods -n kube-system: exit status 1 (88.90434ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ps5kv, age: 5m3.730510653s

                                                
                                                
** /stderr **
I1209 23:21:53.733617  297827 retry.go:31] will retry after 45.204601768s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-006125 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-006125 top pods -n kube-system: exit status 1 (94.716836ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ps5kv, age: 5m49.02989615s

                                                
                                                
** /stderr **
I1209 23:22:39.033286  297827 retry.go:31] will retry after 1m5.873890764s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-006125 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-006125 top pods -n kube-system: exit status 1 (151.503452ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ps5kv, age: 6m55.057356629s

                                                
                                                
** /stderr **
I1209 23:23:45.061250  297827 retry.go:31] will retry after 31.682320997s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-006125 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-006125 top pods -n kube-system: exit status 1 (90.634933ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ps5kv, age: 7m26.834575487s

                                                
                                                
** /stderr **
I1209 23:24:16.838159  297827 retry.go:31] will retry after 47.742693213s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-006125 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-006125 top pods -n kube-system: exit status 1 (92.606623ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ps5kv, age: 8m14.670031704s

                                                
                                                
** /stderr **
I1209 23:25:04.673947  297827 retry.go:31] will retry after 35.058122185s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-006125 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-006125 top pods -n kube-system: exit status 1 (91.112131ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ps5kv, age: 8m49.820883559s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
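The retry.go lines above show the same check being re-run with growing, irregular waits until the test's time budget is exhausted. A minimal sketch of that pattern; the function shape and the use of exec here are illustrative assumptions, not minikube's actual retry code:

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// retryUntil re-runs check with jittered exponential backoff until it
	// succeeds or the deadline passes, echoing the "will retry after" log lines.
	func retryUntil(deadline time.Time, base time.Duration, check func() error) error {
		backoff := base
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("failed checking metric server: %w", err)
			}
			wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
			backoff *= 2
		}
	}

	func main() {
		check := func() error {
			// The probe the test drives: kubectl top pods -n kube-system.
			return exec.Command("kubectl", "--context", "addons-006125",
				"top", "pods", "-n", "kube-system").Run()
		}
		if err := retryUntil(time.Now().Add(6*time.Minute), 4*time.Second, check); err != nil {
			fmt.Println(err)
		}
	}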
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-006125
helpers_test.go:235: (dbg) docker inspect addons-006125:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1c0e3041e6a1d8fc5a9dd836c364732a80e9112598a3c390cfbc264ec577cf6f",
	        "Created": "2024-12-09T23:16:14.180400827Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 299081,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-09T23:16:14.345821087Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:51526bd7c0894c18bc1ef50650a0aaaea3bed24f70f72f77ac668ae72dfff137",
	        "ResolvConfPath": "/var/lib/docker/containers/1c0e3041e6a1d8fc5a9dd836c364732a80e9112598a3c390cfbc264ec577cf6f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c0e3041e6a1d8fc5a9dd836c364732a80e9112598a3c390cfbc264ec577cf6f/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c0e3041e6a1d8fc5a9dd836c364732a80e9112598a3c390cfbc264ec577cf6f/hosts",
	        "LogPath": "/var/lib/docker/containers/1c0e3041e6a1d8fc5a9dd836c364732a80e9112598a3c390cfbc264ec577cf6f/1c0e3041e6a1d8fc5a9dd836c364732a80e9112598a3c390cfbc264ec577cf6f-json.log",
	        "Name": "/addons-006125",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-006125:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-006125",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cc75547a635283e6768fbb1e623e3138cd28178773a8346f7e6d48d8a039b090-init/diff:/var/lib/docker/overlay2/79ad247dbfb2a02f0d5606be3cc57168963c65e7190a6e757a2f7b99e29945ea/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cc75547a635283e6768fbb1e623e3138cd28178773a8346f7e6d48d8a039b090/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cc75547a635283e6768fbb1e623e3138cd28178773a8346f7e6d48d8a039b090/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cc75547a635283e6768fbb1e623e3138cd28178773a8346f7e6d48d8a039b090/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-006125",
	                "Source": "/var/lib/docker/volumes/addons-006125/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-006125",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-006125",
	                "name.minikube.sigs.k8s.io": "addons-006125",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f0e0c2bf1546dabab93914126310ef3846108b37a10c0264cfd9463d38783b7c",
	            "SandboxKey": "/var/run/docker/netns/f0e0c2bf1546",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-006125": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "471830a571f18c34227cfa076927e612c43f763390187d57c10a2502667e21d9",
	                    "EndpointID": "86b08696a9497ef60a3b05258c4344f178bcfaf1b34440d662fedee2914f7283",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-006125",
	                        "1c0e3041e6a1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
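The post-mortem shells out to docker inspect; the same container state is also reachable programmatically. A minimal sketch against the Docker Engine API Go client; the option choices are assumptions about a typical local setup:

	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
	)

	func main() {
		// Assumption: a local Docker daemon reachable via the standard env vars.
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()
		// The same data `docker inspect addons-006125` prints, as structured fields.
		info, err := cli.ContainerInspect(context.Background(), "addons-006125")
		if err != nil {
			panic(err)
		}
		// On a user-defined network the address lives under Networks, matching
		// the "addons-006125" entry (192.168.49.2) in the dump above.
		fmt.Println(info.State.Status, info.NetworkSettings.Networks["addons-006125"].IPAddress)
	}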
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-006125 -n addons-006125
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-006125 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-006125 logs -n 25: (1.411314399s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-257686 | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC |                     |
	|         | download-docker-257686                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-257686                                                                   | download-docker-257686 | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC | 09 Dec 24 23:15 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-505134   | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC |                     |
	|         | binary-mirror-505134                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33693                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-505134                                                                     | binary-mirror-505134   | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC | 09 Dec 24 23:15 UTC |
	| addons  | disable dashboard -p                                                                        | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC |                     |
	|         | addons-006125                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC |                     |
	|         | addons-006125                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-006125 --wait=true                                                                | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC | 09 Dec 24 23:19 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-006125 addons disable                                                                | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-006125 addons disable                                                                | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-006125 addons disable                                                                | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ip      | addons-006125 ip                                                                            | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	| addons  | addons-006125 addons disable                                                                | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-006125 addons                                                                        | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-006125 addons                                                                        | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC | 09 Dec 24 23:20 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC | 09 Dec 24 23:20 UTC |
	|         | -p addons-006125                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-006125 ssh cat                                                                       | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC | 09 Dec 24 23:20 UTC |
	|         | /opt/local-path-provisioner/pvc-2e1b855f-45ef-4582-80d6-f5a3741f0811_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-006125 addons disable                                                                | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC | 09 Dec 24 23:20 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-006125 addons disable                                                                | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC | 09 Dec 24 23:20 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-006125 addons                                                                        | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:21 UTC | 09 Dec 24 23:21 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-006125 addons                                                                        | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:21 UTC | 09 Dec 24 23:21 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-006125 addons                                                                        | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:21 UTC | 09 Dec 24 23:21 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-006125 ssh curl -s                                                                   | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:21 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-006125 ip                                                                            | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:23 UTC | 09 Dec 24 23:23 UTC |
	| addons  | addons-006125 addons disable                                                                | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:23 UTC | 09 Dec 24 23:23 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-006125 addons disable                                                                | addons-006125          | jenkins | v1.34.0 | 09 Dec 24 23:23 UTC | 09 Dec 24 23:24 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:15:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.2 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
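	For reference, every entry below follows the [IWEF] format declared above; decoding the first line as a worked example (annotation, not part of the captured log):
	# I1209 23:15:48.009072  298586 out.go:345] Setting OutFile to fd 1 ...
	# I               -> severity: Info (W=warning, E=error, F=fatal)
	# 1209            -> mmdd: December 9
	# 23:15:48.009072 -> hh:mm:ss.uuuuuu wall-clock time
	# 298586          -> threadid (constant throughout, so effectively this minikube process)
	# out.go:345      -> source file and line that emitted the message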
	I1209 23:15:48.009072  298586 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:15:48.010065  298586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:15:48.010149  298586 out.go:358] Setting ErrFile to fd 2...
	I1209 23:15:48.010174  298586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:15:48.010521  298586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-292449/.minikube/bin
	I1209 23:15:48.011241  298586 out.go:352] Setting JSON to false
	I1209 23:15:48.012450  298586 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7089,"bootTime":1733779059,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1209 23:15:48.012591  298586 start.go:139] virtualization:  
	I1209 23:15:48.015679  298586 out.go:177] * [addons-006125] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1209 23:15:48.018874  298586 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 23:15:48.019012  298586 notify.go:220] Checking for updates...
	I1209 23:15:48.023671  298586 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:15:48.026520  298586 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-292449/kubeconfig
	I1209 23:15:48.028737  298586 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-292449/.minikube
	I1209 23:15:48.030918  298586 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 23:15:48.033156  298586 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:15:48.036177  298586 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:15:48.067551  298586 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1209 23:15:48.067686  298586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:15:48.127149  298586 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-12-09 23:15:48.117752325 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1209 23:15:48.127266  298586 docker.go:318] overlay module found
	I1209 23:15:48.129900  298586 out.go:177] * Using the docker driver based on user configuration
	I1209 23:15:48.131758  298586 start.go:297] selected driver: docker
	I1209 23:15:48.131782  298586 start.go:901] validating driver "docker" against <nil>
	I1209 23:15:48.131798  298586 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:15:48.132596  298586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:15:48.192793  298586 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-12-09 23:15:48.184046246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1209 23:15:48.193024  298586 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 23:15:48.193254  298586 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:15:48.195355  298586 out.go:177] * Using Docker driver with root privileges
	I1209 23:15:48.197128  298586 cni.go:84] Creating CNI manager for ""
	I1209 23:15:48.197205  298586 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 23:15:48.197221  298586 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 23:15:48.197302  298586 start.go:340] cluster config:
	{Name:addons-006125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-006125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:15:48.200787  298586 out.go:177] * Starting "addons-006125" primary control-plane node in "addons-006125" cluster
	I1209 23:15:48.202575  298586 cache.go:121] Beginning downloading kic base image for docker with crio
	I1209 23:15:48.204723  298586 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1209 23:15:48.206573  298586 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:15:48.206637  298586 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-292449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4
	I1209 23:15:48.206651  298586 cache.go:56] Caching tarball of preloaded images
	I1209 23:15:48.206671  298586 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1209 23:15:48.206756  298586 preload.go:172] Found /home/jenkins/minikube-integration/19888-292449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1209 23:15:48.206767  298586 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 23:15:48.207164  298586 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/config.json ...
	I1209 23:15:48.207198  298586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/config.json: {Name:mk210deb0807675a1ac7bb384b35a79a82b38cdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
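	The saved profile is plain JSON, so it can be pretty-printed for inspection on the Jenkins host (a sketch; assumes python3 is available there):
	python3 -m json.tool \
	  /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/config.json | head -n 20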
	I1209 23:15:48.223239  298586 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1209 23:15:48.223382  298586 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1209 23:15:48.223406  298586 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory, skipping pull
	I1209 23:15:48.223416  298586 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in cache, skipping pull
	I1209 23:15:48.223425  298586 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1209 23:15:48.223436  298586 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from local cache
	I1209 23:16:06.778276  298586 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from cached tarball
	I1209 23:16:06.778315  298586 cache.go:194] Successfully downloaded all kic artifacts
	I1209 23:16:06.778362  298586 start.go:360] acquireMachinesLock for addons-006125: {Name:mk95fb822276b933d828a80e13ca25416178bd49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:16:06.778494  298586 start.go:364] duration metric: took 108.022µs to acquireMachinesLock for "addons-006125"
	I1209 23:16:06.778526  298586 start.go:93] Provisioning new machine with config: &{Name:addons-006125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-006125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 23:16:06.778620  298586 start.go:125] createHost starting for "" (driver="docker")
	I1209 23:16:06.781156  298586 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1209 23:16:06.781426  298586 start.go:159] libmachine.API.Create for "addons-006125" (driver="docker")
	I1209 23:16:06.781463  298586 client.go:168] LocalClient.Create starting
	I1209 23:16:06.781610  298586 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19888-292449/.minikube/certs/ca.pem
	I1209 23:16:07.078943  298586 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19888-292449/.minikube/certs/cert.pem
	I1209 23:16:07.671170  298586 cli_runner.go:164] Run: docker network inspect addons-006125 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 23:16:07.687510  298586 cli_runner.go:211] docker network inspect addons-006125 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 23:16:07.687609  298586 network_create.go:284] running [docker network inspect addons-006125] to gather additional debugging logs...
	I1209 23:16:07.687632  298586 cli_runner.go:164] Run: docker network inspect addons-006125
	W1209 23:16:07.702664  298586 cli_runner.go:211] docker network inspect addons-006125 returned with exit code 1
	I1209 23:16:07.702701  298586 network_create.go:287] error running [docker network inspect addons-006125]: docker network inspect addons-006125: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-006125 not found
	I1209 23:16:07.702732  298586 network_create.go:289] output of [docker network inspect addons-006125]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-006125 not found
	
	** /stderr **
	I1209 23:16:07.702831  298586 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 23:16:07.719914  298586 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001e28c40}
	I1209 23:16:07.719952  298586 network_create.go:124] attempt to create docker network addons-006125 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1209 23:16:07.720017  298586 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-006125 addons-006125
	I1209 23:16:07.791907  298586 network_create.go:108] docker network addons-006125 192.168.49.0/24 created
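	The subnet chosen above can be confirmed after the fact with the same inspect template minikube uses (a sketch; assumes the network from this run still exists):
	docker network inspect addons-006125 \
	  --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'
	# expected: 192.168.49.0/24 gw 192.168.49.1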
	I1209 23:16:07.791941  298586 kic.go:121] calculated static IP "192.168.49.2" for the "addons-006125" container
	I1209 23:16:07.792031  298586 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 23:16:07.808797  298586 cli_runner.go:164] Run: docker volume create addons-006125 --label name.minikube.sigs.k8s.io=addons-006125 --label created_by.minikube.sigs.k8s.io=true
	I1209 23:16:07.824934  298586 oci.go:103] Successfully created a docker volume addons-006125
	I1209 23:16:07.825066  298586 cli_runner.go:164] Run: docker run --rm --name addons-006125-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006125 --entrypoint /usr/bin/test -v addons-006125:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib
	I1209 23:16:09.933833  298586 cli_runner.go:217] Completed: docker run --rm --name addons-006125-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006125 --entrypoint /usr/bin/test -v addons-006125:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib: (2.108716106s)
	I1209 23:16:09.933864  298586 oci.go:107] Successfully prepared a docker volume addons-006125
	I1209 23:16:09.933897  298586 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:16:09.933917  298586 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 23:16:09.933991  298586 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19888-292449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-006125:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 23:16:14.112748  298586 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19888-292449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-006125:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir: (4.178695792s)
	I1209 23:16:14.112785  298586 kic.go:203] duration metric: took 4.178864582s to extract preloaded images to volume ...
	W1209 23:16:14.112952  298586 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1209 23:16:14.113069  298586 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 23:16:14.164785  298586 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-006125 --name addons-006125 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006125 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-006125 --network addons-006125 --ip 192.168.49.2 --volume addons-006125:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615
	I1209 23:16:14.525652  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Running}}
	I1209 23:16:14.545990  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:14.567828  298586 cli_runner.go:164] Run: docker exec addons-006125 stat /var/lib/dpkg/alternatives/iptables
	I1209 23:16:14.620520  298586 oci.go:144] the created container "addons-006125" has a running status.
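	The status check above and the randomly published host ports from the docker run can be replayed by hand (a sketch; the port numbers are specific to this run):
	docker container inspect addons-006125 --format '{{.State.Status}}'   # -> running
	docker port addons-006125 22/tcp                                      # -> 127.0.0.1:33138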
	I1209 23:16:14.620554  298586 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa...
	I1209 23:16:14.919642  298586 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 23:16:14.952954  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:14.981192  298586 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 23:16:14.981211  298586 kic_runner.go:114] Args: [docker exec --privileged addons-006125 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 23:16:15.093238  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:15.118080  298586 machine.go:93] provisionDockerMachine start ...
	I1209 23:16:15.118182  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:15.142294  298586 main.go:141] libmachine: Using SSH client type: native
	I1209 23:16:15.142602  298586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1209 23:16:15.142612  298586 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:16:15.145192  298586 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42088->127.0.0.1:33138: read: connection reset by peer
	I1209 23:16:18.267158  298586 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-006125
	
	I1209 23:16:18.267193  298586 ubuntu.go:169] provisioning hostname "addons-006125"
	I1209 23:16:18.267299  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:18.285418  298586 main.go:141] libmachine: Using SSH client type: native
	I1209 23:16:18.285693  298586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1209 23:16:18.285710  298586 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-006125 && echo "addons-006125" | sudo tee /etc/hostname
	I1209 23:16:18.419316  298586 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-006125
	
	I1209 23:16:18.419400  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:18.443085  298586 main.go:141] libmachine: Using SSH client type: native
	I1209 23:16:18.443385  298586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1209 23:16:18.443410  298586 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-006125' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-006125/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-006125' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:16:18.567270  298586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:16:18.567308  298586 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19888-292449/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-292449/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-292449/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-292449/.minikube}
	I1209 23:16:18.567332  298586 ubuntu.go:177] setting up certificates
	I1209 23:16:18.567343  298586 provision.go:84] configureAuth start
	I1209 23:16:18.567418  298586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006125
	I1209 23:16:18.585473  298586 provision.go:143] copyHostCerts
	I1209 23:16:18.585596  298586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-292449/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-292449/.minikube/ca.pem (1082 bytes)
	I1209 23:16:18.585727  298586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-292449/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-292449/.minikube/cert.pem (1123 bytes)
	I1209 23:16:18.585788  298586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-292449/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-292449/.minikube/key.pem (1679 bytes)
	I1209 23:16:18.585838  298586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-292449/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-292449/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-292449/.minikube/certs/ca-key.pem org=jenkins.addons-006125 san=[127.0.0.1 192.168.49.2 addons-006125 localhost minikube]
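	The SAN list requested here can be verified on the generated certificate with openssl (a sketch; paths are the ones from this run):
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19888-292449/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# should list addons-006125, localhost, minikube, 127.0.0.1 and 192.168.49.2, in some order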
	I1209 23:16:19.846315  298586 provision.go:177] copyRemoteCerts
	I1209 23:16:19.846385  298586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:16:19.846431  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:19.863559  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
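	The same client configuration works for a manual login into the node (a sketch; host port 33138 is specific to this run):
	ssh -i /home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa \
	    -p 33138 docker@127.0.0.1 hostname
	# -> addons-006125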
	I1209 23:16:19.952104  298586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-292449/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:16:19.976917  298586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-292449/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 23:16:20.013687  298586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-292449/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 23:16:20.073489  298586 provision.go:87] duration metric: took 1.506117982s to configureAuth
	I1209 23:16:20.073610  298586 ubuntu.go:193] setting minikube options for container-runtime
	I1209 23:16:20.073828  298586 config.go:182] Loaded profile config "addons-006125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:16:20.073957  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:20.091513  298586 main.go:141] libmachine: Using SSH client type: native
	I1209 23:16:20.091803  298586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1209 23:16:20.091827  298586 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:16:20.316949  298586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:16:20.316970  298586 machine.go:96] duration metric: took 5.198867058s to provisionDockerMachine
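	A minimal way to confirm the runtime option written above actually landed, using the same ssh entry point the tests use (a sketch):
	out/minikube-linux-arm64 -p addons-006125 ssh "cat /etc/sysconfig/crio.minikube"
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '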
	I1209 23:16:20.316981  298586 client.go:171] duration metric: took 13.535508825s to LocalClient.Create
	I1209 23:16:20.316994  298586 start.go:167] duration metric: took 13.535574229s to libmachine.API.Create "addons-006125"
	I1209 23:16:20.317003  298586 start.go:293] postStartSetup for "addons-006125" (driver="docker")
	I1209 23:16:20.317014  298586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:16:20.317078  298586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:16:20.317123  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:20.335574  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:20.424728  298586 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:16:20.428241  298586 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 23:16:20.428281  298586 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1209 23:16:20.428293  298586 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1209 23:16:20.428300  298586 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1209 23:16:20.428312  298586 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-292449/.minikube/addons for local assets ...
	I1209 23:16:20.428389  298586 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-292449/.minikube/files for local assets ...
	I1209 23:16:20.428418  298586 start.go:296] duration metric: took 111.409427ms for postStartSetup
	I1209 23:16:20.428738  298586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006125
	I1209 23:16:20.447222  298586 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/config.json ...
	I1209 23:16:20.447529  298586 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 23:16:20.447584  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:20.464864  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:20.552134  298586 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 23:16:20.556856  298586 start.go:128] duration metric: took 13.778218127s to createHost
	I1209 23:16:20.556932  298586 start.go:83] releasing machines lock for "addons-006125", held for 13.778424826s
	I1209 23:16:20.557021  298586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006125
	I1209 23:16:20.575011  298586 ssh_runner.go:195] Run: cat /version.json
	I1209 23:16:20.575079  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:20.575273  298586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:16:20.575331  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:20.598643  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:20.603521  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:20.820094  298586 ssh_runner.go:195] Run: systemctl --version
	I1209 23:16:20.824866  298586 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:16:20.968875  298586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1209 23:16:20.973371  298586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:16:20.996987  298586 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1209 23:16:20.997086  298586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:16:21.042853  298586 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
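	After this step the disabled configs are still present, just renamed, which can be checked directly (a sketch; file names taken from the log line above):
	out/minikube-linux-arm64 -p addons-006125 ssh "ls /etc/cni/net.d"
	# 87-podman-bridge.conflist.mk_disabled
	# 100-crio-bridge.conf.mk_disabled
	# ... plus the *loopback.conf*.mk_disabled from the earlier step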
	I1209 23:16:21.042880  298586 start.go:495] detecting cgroup driver to use...
	I1209 23:16:21.042915  298586 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 23:16:21.042981  298586 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:16:21.062106  298586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:16:21.075258  298586 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:16:21.075347  298586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:16:21.090284  298586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:16:21.106633  298586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:16:21.198993  298586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:16:21.300289  298586 docker.go:233] disabling docker service ...
	I1209 23:16:21.300451  298586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:16:21.324000  298586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:16:21.336566  298586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:16:21.422343  298586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:16:21.512489  298586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:16:21.525048  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:16:21.541869  298586 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:16:21.541975  298586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:16:21.551857  298586 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:16:21.551990  298586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:16:21.562929  298586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:16:21.574082  298586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:16:21.584854  298586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:16:21.594274  298586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:16:21.604352  298586 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:16:21.621161  298586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
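	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly this fragment (reconstructed from the commands, not captured from the node; the section headers are the usual cri-o ones and are an assumption here):
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]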
	I1209 23:16:21.631237  298586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:16:21.640622  298586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:16:21.649486  298586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:16:21.729351  298586 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:16:21.845168  298586 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:16:21.845257  298586 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:16:21.848961  298586 start.go:563] Will wait 60s for crictl version
	I1209 23:16:21.849031  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:16:21.852597  298586 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:16:21.892477  298586 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
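	Because /etc/crictl.yaml was pointed at the cri-o socket a few steps earlier, the same version check works by hand without extra flags (a sketch):
	sudo crictl version
	# equivalent long form:
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version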
	I1209 23:16:21.892600  298586 ssh_runner.go:195] Run: crio --version
	I1209 23:16:21.930523  298586 ssh_runner.go:195] Run: crio --version
	I1209 23:16:21.970129  298586 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1209 23:16:21.971837  298586 cli_runner.go:164] Run: docker network inspect addons-006125 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 23:16:21.992162  298586 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 23:16:21.995767  298586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:16:22.010671  298586 kubeadm.go:883] updating cluster {Name:addons-006125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-006125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:16:22.010814  298586 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:16:22.010878  298586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:16:22.091855  298586 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:16:22.091882  298586 crio.go:433] Images already preloaded, skipping extraction
	I1209 23:16:22.091942  298586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:16:22.132163  298586 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:16:22.132188  298586 cache_images.go:84] Images are preloaded, skipping loading
	I1209 23:16:22.132197  298586 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1209 23:16:22.132331  298586 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-006125 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-006125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
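	Once the unit file and the drop-in (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down) are in place, both can be viewed in one shot from inside the node (a sketch):
	out/minikube-linux-arm64 -p addons-006125 ssh "systemctl cat kubelet"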
	I1209 23:16:22.132421  298586 ssh_runner.go:195] Run: crio config
	I1209 23:16:22.180839  298586 cni.go:84] Creating CNI manager for ""
	I1209 23:16:22.180915  298586 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 23:16:22.180942  298586 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:16:22.180988  298586 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-006125 NodeName:addons-006125 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 23:16:22.181144  298586 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-006125"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:16:22.181227  298586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 23:16:22.191686  298586 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:16:22.191803  298586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:16:22.200942  298586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1209 23:16:22.219486  298586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:16:22.238393  298586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
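	Once the config is staged at /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked offline before the cluster is initialized; a minimal sketch, assuming the `kubeadm config validate` subcommand is available in the bundled v1.31.2 binaries:

	    # Sketch: validate the staged kubeadm config before `kubeadm init` runs.
	    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new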
	I1209 23:16:22.256923  298586 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 23:16:22.260540  298586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
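	The quoted one-liner above rewrites /etc/hosts in place; unpacked, it is equivalent to this sketch:

	    # Drop any stale control-plane.minikube.internal entry, append the
	    # current mapping, then install the rewritten file via sudo.
	    {
	      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	      printf '192.168.49.2\tcontrol-plane.minikube.internal\n'
	    } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts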
	I1209 23:16:22.271531  298586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:16:22.363606  298586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:16:22.378171  298586 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125 for IP: 192.168.49.2
	I1209 23:16:22.378243  298586 certs.go:194] generating shared ca certs ...
	I1209 23:16:22.378275  298586 certs.go:226] acquiring lock for ca certs: {Name:mk059c8f83fb5636d205d77749a6b58de9d7eb72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:22.378921  298586 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-292449/.minikube/ca.key
	I1209 23:16:22.861873  298586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-292449/.minikube/ca.crt ...
	I1209 23:16:22.861908  298586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-292449/.minikube/ca.crt: {Name:mk9860f7e41edc46298549c904da9356bdddbd82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:22.862558  298586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-292449/.minikube/ca.key ...
	I1209 23:16:22.862578  298586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-292449/.minikube/ca.key: {Name:mkce41db02665dc8406951414731c623a2fb1b8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:22.863100  298586 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-292449/.minikube/proxy-client-ca.key
	I1209 23:16:23.321386  298586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-292449/.minikube/proxy-client-ca.crt ...
	I1209 23:16:23.321417  298586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-292449/.minikube/proxy-client-ca.crt: {Name:mk7020e675ca5ad8d2493e6af48756e7c7cfef31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:23.321623  298586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-292449/.minikube/proxy-client-ca.key ...
	I1209 23:16:23.321637  298586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-292449/.minikube/proxy-client-ca.key: {Name:mk7fbb3af690707396062e3bf118a4633aabef95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:23.322360  298586 certs.go:256] generating profile certs ...
	I1209 23:16:23.322427  298586 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.key
	I1209 23:16:23.322446  298586 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt with IP's: []
	I1209 23:16:23.622763  298586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt ...
	I1209 23:16:23.622794  298586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: {Name:mke74c1e48f47a63def2eed44915a9384d731e13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:23.622978  298586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.key ...
	I1209 23:16:23.622991  298586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.key: {Name:mka0b6333c9e6837ad55b080b9aed1423480853d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:23.623511  298586 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/apiserver.key.cba0d7a3
	I1209 23:16:23.623536  298586 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/apiserver.crt.cba0d7a3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1209 23:16:23.941703  298586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/apiserver.crt.cba0d7a3 ...
	I1209 23:16:23.941734  298586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/apiserver.crt.cba0d7a3: {Name:mk38a62048871c794f8fcb0fcaaa1a91632d5521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:23.942475  298586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/apiserver.key.cba0d7a3 ...
	I1209 23:16:23.942494  298586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/apiserver.key.cba0d7a3: {Name:mk66bab5632aee0aafbb8d6e409315b562bb1280 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:23.943041  298586 certs.go:381] copying /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/apiserver.crt.cba0d7a3 -> /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/apiserver.crt
	I1209 23:16:23.943165  298586 certs.go:385] copying /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/apiserver.key.cba0d7a3 -> /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/apiserver.key
	I1209 23:16:23.943228  298586 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/proxy-client.key
	I1209 23:16:23.943251  298586 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/proxy-client.crt with IP's: []
	I1209 23:16:24.639551  298586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/proxy-client.crt ...
	I1209 23:16:24.639591  298586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/proxy-client.crt: {Name:mk7aa9d546bcaf8bf1626d6d750cfff08df1915a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:24.640520  298586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/proxy-client.key ...
	I1209 23:16:24.640541  298586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/proxy-client.key: {Name:mk0253c42106ba746fda4716cda32f8c74383558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:24.640773  298586 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-292449/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:16:24.640820  298586 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-292449/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:16:24.640850  298586 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-292449/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:16:24.640878  298586 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-292449/.minikube/certs/key.pem (1679 bytes)
	I1209 23:16:24.641499  298586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-292449/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:16:24.672587  298586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-292449/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 23:16:24.696891  298586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-292449/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:16:24.722073  298586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-292449/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:16:24.746888  298586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 23:16:24.770829  298586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 23:16:24.795602  298586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:16:24.820070  298586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 23:16:24.844891  298586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-292449/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:16:24.869484  298586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:16:24.888892  298586 ssh_runner.go:195] Run: openssl version
	I1209 23:16:24.894586  298586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:16:24.904699  298586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:16:24.908359  298586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:16 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:16:24.908466  298586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:16:24.915519  298586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
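	The `b5213941.0` link name is not arbitrary: it is the CA certificate's OpenSSL subject hash (computed by the `openssl x509 -hash` call above), which is how tools look certificates up in /etc/ssl/certs. A sketch of the same two steps:

	    # Derive the subject-hash link name and create the trust-store symlink.
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"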
	I1209 23:16:24.925466  298586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:16:24.929107  298586 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 23:16:24.929166  298586 kubeadm.go:392] StartCluster: {Name:addons-006125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-006125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:16:24.929258  298586 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:16:24.929318  298586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:16:24.966868  298586 cri.go:89] found id: ""
	I1209 23:16:24.966939  298586 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:16:24.976140  298586 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:16:24.985234  298586 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1209 23:16:24.985333  298586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:16:24.995052  298586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:16:24.995076  298586 kubeadm.go:157] found existing configuration files:
	
	I1209 23:16:24.995204  298586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 23:16:25.008963  298586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:16:25.009192  298586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:16:25.020118  298586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 23:16:25.030685  298586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:16:25.030770  298586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:16:25.040407  298586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 23:16:25.050454  298586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:16:25.050528  298586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:16:25.059708  298586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 23:16:25.069137  298586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:16:25.069247  298586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 23:16:25.079042  298586 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 23:16:25.145648  298586 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1072-aws\n", err: exit status 1
	I1209 23:16:25.207898  298586 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
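	Both warnings come from kubeadm's preflight checks; that phase can be replayed on its own to reproduce them, a sketch using the same staged config:

	    # Sketch: re-run only the preflight phase against the staged config.
	    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm init phase preflight \
	      --config /var/tmp/minikube/kubeadm.yaml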
	I1209 23:16:43.877652  298586 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 23:16:43.877711  298586 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 23:16:43.877798  298586 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1209 23:16:43.877854  298586 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1072-aws
	I1209 23:16:43.877889  298586 kubeadm.go:310] OS: Linux
	I1209 23:16:43.877934  298586 kubeadm.go:310] CGROUPS_CPU: enabled
	I1209 23:16:43.877981  298586 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1209 23:16:43.878028  298586 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1209 23:16:43.878076  298586 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1209 23:16:43.878124  298586 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1209 23:16:43.878173  298586 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1209 23:16:43.878218  298586 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1209 23:16:43.878266  298586 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1209 23:16:43.878313  298586 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1209 23:16:43.878384  298586 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 23:16:43.878478  298586 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 23:16:43.878567  298586 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 23:16:43.878629  298586 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 23:16:43.880713  298586 out.go:235]   - Generating certificates and keys ...
	I1209 23:16:43.880824  298586 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 23:16:43.880897  298586 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 23:16:43.880973  298586 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 23:16:43.881064  298586 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1209 23:16:43.881141  298586 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1209 23:16:43.881195  298586 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1209 23:16:43.881252  298586 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1209 23:16:43.881368  298586 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-006125 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1209 23:16:43.881421  298586 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1209 23:16:43.881540  298586 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-006125 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1209 23:16:43.881605  298586 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 23:16:43.881667  298586 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 23:16:43.881711  298586 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1209 23:16:43.881766  298586 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 23:16:43.881816  298586 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 23:16:43.881872  298586 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 23:16:43.881930  298586 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 23:16:43.881992  298586 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 23:16:43.882046  298586 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 23:16:43.882125  298586 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 23:16:43.882191  298586 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 23:16:43.884151  298586 out.go:235]   - Booting up control plane ...
	I1209 23:16:43.884259  298586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 23:16:43.884345  298586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 23:16:43.884421  298586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 23:16:43.884533  298586 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 23:16:43.884627  298586 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 23:16:43.884672  298586 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 23:16:43.884811  298586 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 23:16:43.884925  298586 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 23:16:43.884994  298586 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001675956s
	I1209 23:16:43.885076  298586 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 23:16:43.885140  298586 kubeadm.go:310] [api-check] The API server is healthy after 7.001486066s
	I1209 23:16:43.885256  298586 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 23:16:43.885393  298586 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 23:16:43.885459  298586 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 23:16:43.885659  298586 kubeadm.go:310] [mark-control-plane] Marking the node addons-006125 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 23:16:43.885722  298586 kubeadm.go:310] [bootstrap-token] Using token: n4ct29.x0cho1mo7j2uiwhv
	I1209 23:16:43.887626  298586 out.go:235]   - Configuring RBAC rules ...
	I1209 23:16:43.887853  298586 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 23:16:43.887978  298586 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 23:16:43.888130  298586 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 23:16:43.888277  298586 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 23:16:43.888398  298586 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 23:16:43.888486  298586 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 23:16:43.888604  298586 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 23:16:43.888652  298586 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 23:16:43.888704  298586 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 23:16:43.888712  298586 kubeadm.go:310] 
	I1209 23:16:43.888772  298586 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 23:16:43.888780  298586 kubeadm.go:310] 
	I1209 23:16:43.888856  298586 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 23:16:43.888863  298586 kubeadm.go:310] 
	I1209 23:16:43.888888  298586 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 23:16:43.888950  298586 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 23:16:43.889005  298586 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 23:16:43.889013  298586 kubeadm.go:310] 
	I1209 23:16:43.889067  298586 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 23:16:43.889074  298586 kubeadm.go:310] 
	I1209 23:16:43.889121  298586 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 23:16:43.889129  298586 kubeadm.go:310] 
	I1209 23:16:43.889182  298586 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 23:16:43.889261  298586 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 23:16:43.889332  298586 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 23:16:43.889340  298586 kubeadm.go:310] 
	I1209 23:16:43.889423  298586 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 23:16:43.889502  298586 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 23:16:43.889510  298586 kubeadm.go:310] 
	I1209 23:16:43.889598  298586 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token n4ct29.x0cho1mo7j2uiwhv \
	I1209 23:16:43.889704  298586 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ca6f268d2720bcc3dcc63add200af5349fb88d3412781ec48479c46aca637593 \
	I1209 23:16:43.889727  298586 kubeadm.go:310] 	--control-plane 
	I1209 23:16:43.889731  298586 kubeadm.go:310] 
	I1209 23:16:43.889819  298586 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 23:16:43.889826  298586 kubeadm.go:310] 
	I1209 23:16:43.889908  298586 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token n4ct29.x0cho1mo7j2uiwhv \
	I1209 23:16:43.890024  298586 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ca6f268d2720bcc3dcc63add200af5349fb88d3412781ec48479c46aca637593 
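	The --discovery-token-ca-cert-hash value above is the SHA-256 of the cluster CA's DER-encoded public key, so it can be recomputed on the node; a sketch, assuming the RSA CA minikube writes to /var/lib/minikube/certs/ca.crt:

	    # Recompute the discovery hash printed in the join commands above.
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'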
	I1209 23:16:43.890037  298586 cni.go:84] Creating CNI manager for ""
	I1209 23:16:43.890046  298586 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 23:16:43.893241  298586 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1209 23:16:43.895219  298586 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1209 23:16:43.899614  298586 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1209 23:16:43.899652  298586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1209 23:16:43.918103  298586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
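	After the manifest is applied, the kindnet rollout can be verified with the same bundled kubectl; the app=kindnet label is an assumption about the manifest minikube ships:

	    # Sketch: check that the kindnet pods came up (label is an assumption).
	    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system get pods -l app=kindnet -o wide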
	I1209 23:16:44.212717  298586 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 23:16:44.212872  298586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:16:44.212937  298586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-006125 minikube.k8s.io/updated_at=2024_12_09T23_16_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=addons-006125 minikube.k8s.io/primary=true
	I1209 23:16:44.392783  298586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:16:44.392856  298586 ops.go:34] apiserver oom_adj: -16
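	minikube reads the apiserver's oom_adj to confirm the control plane is protected from the OOM killer; the same check by hand, with the newer oom_score_adj file alongside:

	    # Sketch: inspect the apiserver's OOM settings directly.
	    pid=$(pgrep kube-apiserver)
	    cat /proc/$pid/oom_adj /proc/$pid/oom_score_adj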
	I1209 23:16:44.893531  298586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:16:45.393302  298586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:16:45.892832  298586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:16:46.392887  298586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:16:46.892881  298586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:16:47.392974  298586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:16:47.892891  298586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:16:48.000287  298586 kubeadm.go:1113] duration metric: took 3.787466049s to wait for elevateKubeSystemPrivileges
	I1209 23:16:48.000320  298586 kubeadm.go:394] duration metric: took 23.071159301s to StartCluster
	I1209 23:16:48.000340  298586 settings.go:142] acquiring lock: {Name:mk5e8ade0aba5028c542a17cc3ac26b2fce0612a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:48.000573  298586 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-292449/kubeconfig
	I1209 23:16:48.001013  298586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-292449/kubeconfig: {Name:mkb1748c465c9240b5ac61d2f2426a68610afd6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:16:48.001257  298586 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 23:16:48.001488  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 23:16:48.001830  298586 config.go:182] Loaded profile config "addons-006125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:16:48.001869  298586 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
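	The toEnable map above is the programmatic form of the addon toggles; the same switches are exposed per profile on the CLI, e.g.:

	    # Sketch: list and toggle addons for this profile from the CLI.
	    out/minikube-linux-arm64 -p addons-006125 addons list
	    out/minikube-linux-arm64 -p addons-006125 addons enable metrics-server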
	I1209 23:16:48.001968  298586 addons.go:69] Setting yakd=true in profile "addons-006125"
	I1209 23:16:48.001983  298586 addons.go:234] Setting addon yakd=true in "addons-006125"
	I1209 23:16:48.002011  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.002685  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.003152  298586 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-006125"
	I1209 23:16:48.003177  298586 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-006125"
	I1209 23:16:48.003221  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.003661  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.008635  298586 addons.go:69] Setting cloud-spanner=true in profile "addons-006125"
	I1209 23:16:48.011043  298586 addons.go:234] Setting addon cloud-spanner=true in "addons-006125"
	I1209 23:16:48.011221  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.011832  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.012138  298586 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-006125"
	I1209 23:16:48.012238  298586 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-006125"
	I1209 23:16:48.012328  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.012911  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.025123  298586 addons.go:69] Setting default-storageclass=true in profile "addons-006125"
	I1209 23:16:48.025175  298586 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-006125"
	I1209 23:16:48.025566  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.027914  298586 addons.go:69] Setting gcp-auth=true in profile "addons-006125"
	I1209 23:16:48.028070  298586 mustload.go:65] Loading cluster: addons-006125
	I1209 23:16:48.028545  298586 config.go:182] Loaded profile config "addons-006125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:16:48.029215  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.031587  298586 out.go:177] * Verifying Kubernetes components...
	I1209 23:16:48.044054  298586 addons.go:69] Setting ingress=true in profile "addons-006125"
	I1209 23:16:48.044214  298586 addons.go:234] Setting addon ingress=true in "addons-006125"
	I1209 23:16:48.044298  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.044969  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.059358  298586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:16:48.059679  298586 addons.go:69] Setting volcano=true in profile "addons-006125"
	I1209 23:16:48.059700  298586 addons.go:234] Setting addon volcano=true in "addons-006125"
	I1209 23:16:48.059714  298586 addons.go:69] Setting volumesnapshots=true in profile "addons-006125"
	I1209 23:16:48.059768  298586 addons.go:234] Setting addon volumesnapshots=true in "addons-006125"
	I1209 23:16:48.059836  298586 addons.go:69] Setting ingress-dns=true in profile "addons-006125"
	I1209 23:16:48.059859  298586 addons.go:234] Setting addon ingress-dns=true in "addons-006125"
	I1209 23:16:48.059887  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.060088  298586 addons.go:69] Setting inspektor-gadget=true in profile "addons-006125"
	I1209 23:16:48.060105  298586 addons.go:234] Setting addon inspektor-gadget=true in "addons-006125"
	I1209 23:16:48.060129  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.061455  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.078317  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.078743  298586 addons.go:69] Setting metrics-server=true in profile "addons-006125"
	I1209 23:16:48.078766  298586 addons.go:234] Setting addon metrics-server=true in "addons-006125"
	I1209 23:16:48.078801  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.079288  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.089845  298586 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-006125"
	I1209 23:16:48.089889  298586 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-006125"
	I1209 23:16:48.089933  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.090406  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.109716  298586 addons.go:69] Setting registry=true in profile "addons-006125"
	I1209 23:16:48.109749  298586 addons.go:234] Setting addon registry=true in "addons-006125"
	I1209 23:16:48.109791  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.113710  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.124912  298586 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1209 23:16:48.129068  298586 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1209 23:16:48.129100  298586 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1209 23:16:48.129174  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
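	The inspect template above extracts the host port published for the container's sshd; `docker port` answers the same question more directly:

	    # Sketch: equivalent host-port lookup for the container's SSH endpoint.
	    docker port addons-006125 22/tcp   # prints e.g. 0.0.0.0:33138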
	I1209 23:16:48.129410  298586 addons.go:69] Setting storage-provisioner=true in profile "addons-006125"
	I1209 23:16:48.129443  298586 addons.go:234] Setting addon storage-provisioner=true in "addons-006125"
	I1209 23:16:48.129476  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.129947  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.143216  298586 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-006125"
	I1209 23:16:48.143254  298586 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-006125"
	I1209 23:16:48.143614  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.059842  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.171846  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.059801  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.192101  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.213789  298586 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1209 23:16:48.218380  298586 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1209 23:16:48.244636  298586 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1209 23:16:48.248305  298586 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 23:16:48.253145  298586 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 23:16:48.253311  298586 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1209 23:16:48.256550  298586 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 23:16:48.269863  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1209 23:16:48.269968  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:48.256775  298586 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 23:16:48.281954  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1209 23:16:48.282050  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:48.286886  298586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1209 23:16:48.301467  298586 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1209 23:16:48.307057  298586 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1209 23:16:48.307080  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1209 23:16:48.307171  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:48.269446  298586 addons.go:234] Setting addon default-storageclass=true in "addons-006125"
	I1209 23:16:48.314698  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.315238  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.322970  298586 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1209 23:16:48.326190  298586 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 23:16:48.326228  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1209 23:16:48.326350  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:48.269551  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.357174  298586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1209 23:16:48.367389  298586 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1209 23:16:48.369612  298586 out.go:177]   - Using image docker.io/registry:2.8.3
	I1209 23:16:48.372013  298586 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1209 23:16:48.372039  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1209 23:16:48.372108  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:48.378591  298586 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1209 23:16:48.390681  298586 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 23:16:48.390765  298586 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 23:16:48.390882  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:48.395235  298586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1209 23:16:48.397432  298586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1209 23:16:48.399553  298586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1209 23:16:48.401649  298586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1209 23:16:48.403447  298586 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1209 23:16:48.403472  298586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1209 23:16:48.403545  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:48.407183  298586 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1209 23:16:48.409959  298586 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1209 23:16:48.410030  298586 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1209 23:16:48.410143  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:48.421277  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:48.422694  298586 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-006125"
	I1209 23:16:48.422735  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:48.423148  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:48.433714  298586 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:16:48.434443  298586 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1209 23:16:48.434675  298586 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1209 23:16:48.435484  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
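	The pipeline above patches the CoreDNS Corefile so host.minikube.internal resolves to the gateway and enables query logging; unpacked into a readable sketch with plain kubectl:

	    # Fetch the CoreDNS Corefile, insert a hosts{} block before the forward
	    # plugin and a log directive before errors, then replace the ConfigMap.
	    kubectl -n kube-system get configmap coredns -o yaml \
	      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
	            -e '/^        errors *$/i \        log' \
	      | kubectl replace -f -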
	I1209 23:16:48.435755  298586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:16:48.437270  298586 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 23:16:48.437288  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1209 23:16:48.437348  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:48.450243  298586 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1209 23:16:48.450274  298586 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1209 23:16:48.450339  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	W1209 23:16:48.473128  298586 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1209 23:16:48.473559  298586 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:16:48.473575  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 23:16:48.473638  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:48.523774  298586 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 23:16:48.523794  298586 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 23:16:48.523859  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:48.524206  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:48.528724  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:48.559486  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:48.605570  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:48.634183  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:48.646370  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:48.667210  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:48.668134  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:48.670439  298586 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1209 23:16:48.671580  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:48.672011  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:48.675387  298586 out.go:177]   - Using image docker.io/busybox:stable
	I1209 23:16:48.677870  298586 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 23:16:48.677890  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1209 23:16:48.677958  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:48.678382  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	W1209 23:16:48.679470  298586 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1209 23:16:48.679496  298586 retry.go:31] will retry after 182.427079ms: ssh: handshake failed: EOF
	I1209 23:16:48.716118  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	W1209 23:16:48.718791  298586 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1209 23:16:48.718823  298586 retry.go:31] will retry after 364.837186ms: ssh: handshake failed: EOF
	I1209 23:16:48.720558  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	W1209 23:16:48.721468  298586 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1209 23:16:48.721490  298586 retry.go:31] will retry after 307.067348ms: ssh: handshake failed: EOF
	I1209 23:16:48.802728  298586 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1209 23:16:48.802807  298586 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1209 23:16:48.926330  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1209 23:16:48.951487  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 23:16:48.954461  298586 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1209 23:16:48.954489  298586 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1209 23:16:49.065311  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 23:16:49.085821  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 23:16:49.100369  298586 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1209 23:16:49.100394  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1209 23:16:49.103123  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 23:16:49.120519  298586 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 23:16:49.120544  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1209 23:16:49.129871  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:16:49.133255  298586 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1209 23:16:49.133280  298586 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1209 23:16:49.137225  298586 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1209 23:16:49.137249  298586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1209 23:16:49.138797  298586 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1209 23:16:49.138823  298586 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1209 23:16:49.275860  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1209 23:16:49.282274  298586 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1209 23:16:49.282299  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1209 23:16:49.286019  298586 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1209 23:16:49.286045  298586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1209 23:16:49.289769  298586 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1209 23:16:49.289795  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1209 23:16:49.323398  298586 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 23:16:49.323424  298586 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 23:16:49.334773  298586 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1209 23:16:49.334799  298586 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1209 23:16:49.378843  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 23:16:49.401024  298586 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1209 23:16:49.401051  298586 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1209 23:16:49.474402  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1209 23:16:49.478757  298586 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1209 23:16:49.478785  298586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1209 23:16:49.480708  298586 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:16:49.480734  298586 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 23:16:49.500730  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1209 23:16:49.545561  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 23:16:49.549094  298586 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1209 23:16:49.549118  298586 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1209 23:16:49.621694  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:16:49.625084  298586 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1209 23:16:49.625111  298586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1209 23:16:49.728718  298586 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1209 23:16:49.728745  298586 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1209 23:16:49.792850  298586 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1209 23:16:49.792878  298586 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1209 23:16:49.877574  298586 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 23:16:49.877599  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1209 23:16:49.945027  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 23:16:49.950768  298586 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1209 23:16:49.950795  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1209 23:16:50.001709  298586 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1209 23:16:50.001741  298586 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1209 23:16:50.134350  298586 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1209 23:16:50.134376  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1209 23:16:50.249470  298586 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1209 23:16:50.249493  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1209 23:16:50.331566  298586 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 23:16:50.331595  298586 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1209 23:16:50.397821  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 23:16:50.544717  298586 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.109201235s)
	I1209 23:16:50.544750  298586 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
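
The sed pipeline in the Completed command above rewrites the coredns ConfigMap in place: it inserts a hosts stanza ahead of the forward plugin (and a log directive ahead of errors), so after the replace the Corefile resolves host.minikube.internal to the gateway address before falling through to the host's resolver. The resulting fragment, reconstructed from the sed expression itself:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
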
	I1209 23:16:50.545766  298586 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.10999125s)
	I1209 23:16:50.546511  298586 node_ready.go:35] waiting up to 6m0s for node "addons-006125" to be "Ready" ...
	I1209 23:16:52.032878  298586 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-006125" context rescaled to 1 replicas
	I1209 23:16:52.722785  298586 node_ready.go:53] node "addons-006125" has status "Ready":"False"
	I1209 23:16:52.906228  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.979810589s)
	I1209 23:16:52.906335  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.954826633s)
	I1209 23:16:55.052186  298586 node_ready.go:53] node "addons-006125" has status "Ready":"False"
	I1209 23:16:55.366469  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.263311676s)
	I1209 23:16:55.366541  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.236648785s)
	I1209 23:16:55.366613  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.090729695s)
	I1209 23:16:55.366635  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.987770043s)
	I1209 23:16:55.366831  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.89240078s)
	I1209 23:16:55.366844  298586 addons.go:475] Verifying addon registry=true in "addons-006125"
	I1209 23:16:55.367014  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.28058186s)
	I1209 23:16:55.367149  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.866388668s)
	I1209 23:16:55.367525  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.302178437s)
	I1209 23:16:55.367585  298586 addons.go:475] Verifying addon ingress=true in "addons-006125"
	I1209 23:16:55.367667  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.422612904s)
	W1209 23:16:55.367709  298586 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1209 23:16:55.367734  298586 retry.go:31] will retry after 139.542213ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
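
The failure above is a CRD race: the VolumeSnapshotClass object is applied in the same invocation as the CRDs that define it, and the apiserver has not yet marked the new CRDs Established, so the REST mapping lookup fails with "ensure CRDs are installed first". The retry (and the later pass with --force, completed a few seconds below) succeeds once the CRDs are registered. A sketch of waiting for establishment before applying dependent objects, using client-go's apiextensions client; illustrative only, not minikube's code:

package main

import (
	"context"
	"time"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// waitForCRDEstablished polls until the named CRD reports the
// Established condition; only then can its custom resources be applied.
func waitForCRDEstablished(ctx context.Context, c apiextensionsclient.Interface, name string) error {
	for {
		crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextensionsv1.Established && cond.Status == apiextensionsv1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}
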
	I1209 23:16:55.367536  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.821946713s)
	I1209 23:16:55.367595  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.745870501s)
	I1209 23:16:55.367966  298586 addons.go:475] Verifying addon metrics-server=true in "addons-006125"
	I1209 23:16:55.372469  298586 out.go:177] * Verifying registry addon...
	I1209 23:16:55.372652  298586 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-006125 service yakd-dashboard -n yakd-dashboard
	
	I1209 23:16:55.372721  298586 out.go:177] * Verifying ingress addon...
	I1209 23:16:55.375408  298586 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1209 23:16:55.375941  298586 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1209 23:16:55.396092  298586 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1209 23:16:55.396174  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:16:55.401547  298586 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1209 23:16:55.401572  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1209 23:16:55.403881  298586 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
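
The default-storageclass error above is an optimistic-concurrency conflict: another writer updated the local-path StorageClass between the addon's read and its write, so the stale resourceVersion was rejected. The standard remedy is to re-read the object and reapply the mutation on conflict; a sketch with client-go's retry helper, where the function name is illustrative but the annotation key is the real default-class marker:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// markNonDefault clears the default-class annotation, re-reading the
// StorageClass and retrying whenever the apiserver reports a conflict.
func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err
	})
}
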
	I1209 23:16:55.507737  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 23:16:55.801003  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.403131404s)
	I1209 23:16:55.801101  298586 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-006125"
	I1209 23:16:55.803999  298586 out.go:177] * Verifying csi-hostpath-driver addon...
	I1209 23:16:55.806832  298586 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1209 23:16:55.829781  298586 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1209 23:16:55.829862  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:16:55.885344  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:16:55.886431  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:16:56.311937  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:16:56.413144  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:16:56.414286  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:16:56.811467  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:16:56.879336  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:16:56.879822  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:16:57.311344  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:16:57.411983  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:16:57.412562  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:16:57.550233  298586 node_ready.go:53] node "addons-006125" has status "Ready":"False"
	I1209 23:16:57.811480  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:16:57.879342  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:16:57.880330  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:16:58.311734  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:16:58.332372  298586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.824536578s)
	I1209 23:16:58.411749  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:16:58.412473  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:16:58.811578  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:16:58.879799  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:16:58.880979  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:16:58.968069  298586 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1209 23:16:58.968176  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:58.987651  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:59.093927  298586 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1209 23:16:59.113132  298586 addons.go:234] Setting addon gcp-auth=true in "addons-006125"
	I1209 23:16:59.113187  298586 host.go:66] Checking if "addons-006125" exists ...
	I1209 23:16:59.113686  298586 cli_runner.go:164] Run: docker container inspect addons-006125 --format={{.State.Status}}
	I1209 23:16:59.131538  298586 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1209 23:16:59.131597  298586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006125
	I1209 23:16:59.148822  298586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/addons-006125/id_rsa Username:docker}
	I1209 23:16:59.253208  298586 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 23:16:59.255221  298586 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1209 23:16:59.257080  298586 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1209 23:16:59.257100  298586 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1209 23:16:59.275815  298586 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1209 23:16:59.275843  298586 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1209 23:16:59.294908  298586 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 23:16:59.294934  298586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1209 23:16:59.311376  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:16:59.317949  298586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 23:16:59.380905  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:16:59.384654  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:16:59.550921  298586 node_ready.go:53] node "addons-006125" has status "Ready":"False"
	I1209 23:16:59.829569  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:16:59.851541  298586 addons.go:475] Verifying addon gcp-auth=true in "addons-006125"
	I1209 23:16:59.855305  298586 out.go:177] * Verifying gcp-auth addon...
	I1209 23:16:59.859556  298586 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1209 23:16:59.866178  298586 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1209 23:16:59.866265  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:16:59.964330  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:16:59.965287  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:00.344222  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:00.382147  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:00.396814  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:00.406737  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:00.814718  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:00.863674  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:00.880548  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:00.880810  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:01.315893  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:01.364110  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:01.379938  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:01.380718  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:01.552997  298586 node_ready.go:53] node "addons-006125" has status "Ready":"False"
	I1209 23:17:01.811465  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:01.864202  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:01.880708  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:01.881741  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:02.312176  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:02.364000  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:02.379394  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:02.380126  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:02.811717  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:02.863744  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:02.880470  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:02.881685  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:03.310921  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:03.365059  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:03.379795  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:03.380838  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:03.812865  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:03.864582  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:03.879005  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:03.880040  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:04.050291  298586 node_ready.go:53] node "addons-006125" has status "Ready":"False"
	I1209 23:17:04.312028  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:04.364045  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:04.380408  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:04.381459  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:04.812985  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:04.864261  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:04.881121  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:04.881185  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:05.311411  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:05.364631  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:05.380016  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:05.380964  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:05.811952  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:05.863967  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:05.879445  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:05.880274  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:06.089423  298586 node_ready.go:49] node "addons-006125" has status "Ready":"True"
	I1209 23:17:06.089509  298586 node_ready.go:38] duration metric: took 15.542971108s for node "addons-006125" to be "Ready" ...
	I1209 23:17:06.107753  298586 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:17:06.183008  298586 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ps5kv" in "kube-system" namespace to be "Ready" ...
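
The pod_ready lines above poll each system-critical pod until its Ready condition turns True, recording the elapsed time as a duration metric. A minimal sketch of that check against the Kubernetes API; illustrative, not pod_ready.go itself, and the 2s poll interval is an assumption:

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the named pod until its Ready condition is True,
// matching the `has status "Ready":"True"` transitions in the log.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second):
		}
	}
}
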
	I1209 23:17:06.422810  298586 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1209 23:17:06.422895  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:06.424640  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:06.425328  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:06.425871  298586 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1209 23:17:06.425910  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:06.846248  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:06.871846  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:06.888433  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:06.888929  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:07.317097  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:07.415103  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:07.415888  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:07.421525  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:07.816282  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:07.863494  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:07.882177  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:07.884576  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:08.217281  298586 pod_ready.go:103] pod "coredns-7c65d6cfc9-ps5kv" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:08.314192  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:08.365161  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:08.386503  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:08.386848  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:08.690766  298586 pod_ready.go:93] pod "coredns-7c65d6cfc9-ps5kv" in "kube-system" namespace has status "Ready":"True"
	I1209 23:17:08.690795  298586 pod_ready.go:82] duration metric: took 2.507745046s for pod "coredns-7c65d6cfc9-ps5kv" in "kube-system" namespace to be "Ready" ...
	I1209 23:17:08.690824  298586 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-006125" in "kube-system" namespace to be "Ready" ...
	I1209 23:17:08.698898  298586 pod_ready.go:93] pod "etcd-addons-006125" in "kube-system" namespace has status "Ready":"True"
	I1209 23:17:08.698927  298586 pod_ready.go:82] duration metric: took 8.093412ms for pod "etcd-addons-006125" in "kube-system" namespace to be "Ready" ...
	I1209 23:17:08.698943  298586 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-006125" in "kube-system" namespace to be "Ready" ...
	I1209 23:17:08.708154  298586 pod_ready.go:93] pod "kube-apiserver-addons-006125" in "kube-system" namespace has status "Ready":"True"
	I1209 23:17:08.708182  298586 pod_ready.go:82] duration metric: took 9.228161ms for pod "kube-apiserver-addons-006125" in "kube-system" namespace to be "Ready" ...
	I1209 23:17:08.708194  298586 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-006125" in "kube-system" namespace to be "Ready" ...
	I1209 23:17:08.718690  298586 pod_ready.go:93] pod "kube-controller-manager-addons-006125" in "kube-system" namespace has status "Ready":"True"
	I1209 23:17:08.718716  298586 pod_ready.go:82] duration metric: took 10.514388ms for pod "kube-controller-manager-addons-006125" in "kube-system" namespace to be "Ready" ...
	I1209 23:17:08.718734  298586 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sp7fm" in "kube-system" namespace to be "Ready" ...
	I1209 23:17:08.726434  298586 pod_ready.go:93] pod "kube-proxy-sp7fm" in "kube-system" namespace has status "Ready":"True"
	I1209 23:17:08.726460  298586 pod_ready.go:82] duration metric: took 7.717899ms for pod "kube-proxy-sp7fm" in "kube-system" namespace to be "Ready" ...
	I1209 23:17:08.726473  298586 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-006125" in "kube-system" namespace to be "Ready" ...
	I1209 23:17:08.813152  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:08.863966  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:08.882444  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:08.883358  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:09.087914  298586 pod_ready.go:93] pod "kube-scheduler-addons-006125" in "kube-system" namespace has status "Ready":"True"
	I1209 23:17:09.087942  298586 pod_ready.go:82] duration metric: took 361.459747ms for pod "kube-scheduler-addons-006125" in "kube-system" namespace to be "Ready" ...
	I1209 23:17:09.087956  298586 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace to be "Ready" ...
	I1209 23:17:09.311927  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:09.363885  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:09.380475  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:09.381644  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:09.812577  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:09.863379  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:09.880565  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:09.881356  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:10.312404  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:10.363689  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:10.380923  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:10.381417  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:10.813486  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:10.864194  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:10.881471  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:10.882503  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:11.096221  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:11.313045  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:11.366381  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:11.416954  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:11.417983  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:11.812597  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:11.863436  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:11.879681  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:11.880786  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:12.312391  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:12.363456  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:12.379078  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:12.382221  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:12.812570  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:12.863842  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:12.883692  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:12.887178  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:13.096489  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:13.314029  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:13.363610  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:13.380114  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:13.380692  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:13.815670  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:13.912422  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:13.913069  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:13.913927  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:14.312856  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:14.364026  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:14.380020  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:14.380896  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:14.812267  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:14.863605  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:14.880198  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:14.880432  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:15.097361  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:15.311825  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:15.363263  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:15.380040  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:15.380705  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:15.811425  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:15.863770  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:15.880133  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:15.880554  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:16.312809  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:16.364595  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:16.383580  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:16.384234  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:16.812265  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:16.864444  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:16.880799  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:16.883888  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:17.312490  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:17.364298  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:17.381777  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:17.382517  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:17.622143  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:17.812224  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:17.864363  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:17.882495  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:17.883905  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:18.316017  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:18.365160  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:18.381494  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:18.385643  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:18.814386  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:18.863732  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:18.880350  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:18.881458  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:19.313741  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:19.363431  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:19.381452  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:19.381829  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:19.811600  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:19.864273  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:19.885564  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:19.888042  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:20.102498  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:20.312480  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:20.364634  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:20.381135  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:20.382824  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:20.811756  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:20.864560  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:20.882332  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:20.883774  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:21.316134  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:21.365044  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:21.383392  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:21.384884  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:21.811771  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:21.863656  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:21.881659  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:21.882674  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:22.311464  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:22.363462  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:22.379244  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:22.382218  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:22.595857  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:22.813552  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:22.864902  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:22.880203  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:22.881233  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:23.314448  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:23.368036  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:23.380228  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:23.382417  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:23.812722  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:23.864067  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:23.881749  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:23.883028  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:24.312369  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:24.412934  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:24.412977  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:24.413424  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:24.616158  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:24.812309  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:24.863503  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:24.879095  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:24.881150  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:25.312852  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:25.363195  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:25.380255  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:25.382627  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:25.812070  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:25.865060  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:25.880790  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:25.882270  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:26.312937  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:26.366020  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:26.380498  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:26.380722  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:26.812102  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:26.863832  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:26.882111  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:26.883622  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:27.097843  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:27.312168  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:27.363068  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:27.379735  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:27.380944  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:27.813102  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:27.863102  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:27.880816  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:27.881001  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:28.312052  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:28.364383  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:28.381949  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:28.382532  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:28.812215  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:28.863774  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:28.879961  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:28.880967  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:29.312147  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:29.412221  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:29.412774  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:29.414507  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:29.622361  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:29.811586  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:29.863730  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:29.882122  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:29.883979  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:30.312368  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:30.363776  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:30.380546  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:30.383477  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:30.812716  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:30.865024  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:30.884388  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:30.885960  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:31.330289  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:31.372828  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:31.412338  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:31.412598  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:31.811247  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:31.864354  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:31.883488  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:31.884586  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:32.096086  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:32.312512  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:32.363868  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:32.380076  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:32.381233  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:32.812760  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:32.863332  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:32.880556  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:32.880799  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:33.311929  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:33.363312  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:33.379648  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:33.381617  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:33.812506  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:33.863515  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:33.881197  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:33.881510  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:34.096697  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:34.313777  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:34.365097  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:34.385236  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:34.387053  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:34.814112  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:34.863631  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:34.880430  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:34.881638  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:35.316682  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:35.365726  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:35.382704  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:35.384097  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:35.817793  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:35.867734  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:35.885453  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:35.889607  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:36.101866  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:36.312490  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:36.364673  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:36.383494  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:36.387063  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:36.812350  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:36.885013  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:36.886838  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:36.888539  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:37.314077  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:37.368964  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:37.381191  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:37.384587  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:37.827127  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:37.863657  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:37.881327  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:37.882560  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:38.105162  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:38.313863  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:38.370014  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:38.404708  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:38.406854  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:38.813624  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:38.863853  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:38.914098  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:38.915409  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:39.314685  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:39.415001  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:39.415648  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:39.416477  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:39.813158  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:39.863687  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:39.883872  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:39.887524  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:40.312624  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:40.364111  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:40.382837  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:40.384931  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:40.595468  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:40.812527  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:40.866301  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:40.884568  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:40.885946  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:41.312834  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:41.363626  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:41.382471  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:41.383307  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:41.812076  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:41.864109  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:41.882143  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:41.882615  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:42.314945  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:42.366375  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:42.382313  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:42.384550  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:42.597304  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:42.813166  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:42.863940  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:42.881947  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:42.883289  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:43.311720  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:43.364605  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:43.380989  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:43.384648  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:43.813378  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:43.863679  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:43.879666  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:43.880331  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:44.312420  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:44.369140  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:44.384343  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:44.385674  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:44.813218  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:44.864558  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:44.881474  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:44.882979  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:45.095781  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:45.312855  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:45.365728  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:45.389349  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:45.398566  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:45.812103  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:45.864942  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:45.884618  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:45.885647  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:46.314968  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:46.363580  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:46.385505  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:46.387027  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:46.812988  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:46.863930  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:46.881953  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:46.883536  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:47.099987  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:47.316578  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:47.363213  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:47.382082  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:47.383700  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:47.811816  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:47.863688  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:47.880426  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:47.881395  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:48.316002  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:48.364599  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:48.379342  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:48.381218  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:48.812564  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:48.863563  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:48.886276  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:48.887622  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:49.101211  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:49.315998  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:49.364411  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:49.390088  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:49.390979  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:49.813388  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:49.865590  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:49.883510  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:49.885222  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:50.312725  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:50.363277  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:50.382203  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:50.383629  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:50.812560  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:50.863096  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:50.892716  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:50.901021  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:51.315653  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:51.363392  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:51.380507  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:51.383951  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:51.595563  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:51.814118  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:51.863655  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:51.880775  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:51.882430  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:52.317074  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:52.365951  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:52.392122  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:52.396532  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:52.817551  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:52.863331  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:52.880833  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:17:52.882287  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:53.317095  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:53.363770  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:53.379589  298586 kapi.go:107] duration metric: took 58.00417968s to wait for kubernetes.io/minikube-addons=registry ...
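	(The runs of kapi.go:96 lines above are one poll loop per label selector: list the matching pods, log the phase while any is still Pending, and emit the kapi.go:107 duration line once everything is Running. Below is a minimal, hypothetical client-go sketch of that pattern; the function names, the 500ms poll interval, and the kube-system namespace are illustrative assumptions, not minikube's actual implementation.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls pods matching selector in ns until at least one exists
// and all of them are Running, mirroring the shape of the kapi.go:96
// "waiting for pod" and kapi.go:107 "duration metric" lines in the log.
func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	start := time.Now()
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				ready = false
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
		}
		if ready {
			fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // poll interval is an assumption
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Selector taken from the registry wait logged above; the kube-system
	// namespace is an assumption about where the registry addon deploys.
	if err := waitForPods(context.Background(), cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
		panic(err)
	}
}
```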
	I1209 23:17:53.380974  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:53.812327  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:53.863561  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:53.881084  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:54.095405  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:54.312060  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:54.363372  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:54.380300  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:54.811888  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:54.863785  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:54.880450  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:55.311982  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:55.362853  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:55.380431  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:55.812121  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:55.869676  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:55.884092  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:56.097706  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:56.313318  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:56.364439  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:56.380881  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:56.812636  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:56.864119  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:56.881048  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:57.313259  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:57.363617  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:57.383005  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:57.813028  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:57.863767  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:57.882984  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:58.125547  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:17:58.312084  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:58.363907  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:58.380377  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:58.812568  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:58.863019  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:58.880246  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:59.319529  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:59.418505  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:59.419494  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:17:59.813734  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:17:59.863177  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:17:59.880680  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:00.143495  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:00.333246  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:00.382434  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:00.398573  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:00.811784  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:00.863601  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:00.880978  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:01.313533  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:01.415309  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:01.416420  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:01.813224  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:01.871805  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:01.882373  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:02.313427  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:02.363852  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:02.380819  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:02.594258  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:02.814829  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:02.864251  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:02.883551  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:03.315529  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:03.364101  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:03.380829  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:03.812594  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:03.864699  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:03.881418  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:04.312769  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:04.362875  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:04.396542  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:04.595674  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:04.812294  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:04.863920  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:04.880131  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:05.317265  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:05.364410  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:05.381936  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:05.813148  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:05.865226  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:05.881709  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:06.324240  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:06.363825  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:06.380047  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:06.597218  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:06.825112  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:06.910817  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:06.912067  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:07.313921  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:07.412728  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:07.414009  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:07.812513  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:07.863713  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:07.881008  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:08.311422  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:08.363342  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:08.380815  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:08.812263  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:08.863134  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:08.880327  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:09.094263  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:09.312277  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:09.363459  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:09.380489  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:09.814632  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:09.864071  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:09.880625  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:10.312592  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:10.364963  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:10.380558  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:10.811206  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:10.865729  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:10.882767  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:11.095565  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:11.312244  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:11.411518  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:18:11.412935  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:11.812388  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:11.863690  298586 kapi.go:107] duration metric: took 1m12.004144578s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1209 23:18:11.866366  298586 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-006125 cluster.
	I1209 23:18:11.868769  298586 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1209 23:18:11.871093  298586 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
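	(The `gcp-auth-skip-secret` opt-out mentioned in the message above is just a pod label. A minimal, hypothetical client-go object showing where the label goes; the value "true", the pod name, and the image are illustrative assumptions.)

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// skippedPod builds a pod carrying the gcp-auth-skip-secret label, which the
// gcp-auth addon message above says prevents GCP credentials from being
// mounted into it. The label value "true" is an assumption for illustration.
func skippedPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-auth",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
}
```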
	I1209 23:18:11.880263  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:12.311680  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:12.381782  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:12.812033  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:12.880563  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:13.097204  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:13.326123  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:13.416180  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:13.812139  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:13.881172  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:14.315394  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:14.419570  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:14.813220  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:14.881690  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:15.097848  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:15.312842  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:15.380758  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:15.819321  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:15.881221  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:16.313528  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:16.381432  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:16.813899  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:16.881148  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:17.312563  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:17.414240  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:17.595553  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:17.813730  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:17.881637  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:18.312993  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:18.382196  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:18.813842  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:18.880788  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:19.313231  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:19.382665  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:19.811742  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:19.914082  298586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:18:20.111624  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:20.313222  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:20.380078  298586 kapi.go:107] duration metric: took 1m25.004131993s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1209 23:18:20.811761  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:21.312057  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:21.811938  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:22.311858  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:22.594589  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:22.812256  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:23.312509  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:23.812860  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:24.312416  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:24.594841  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:24.812100  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:25.312839  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:25.812268  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:26.312025  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:26.814334  298586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:18:27.094841  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:27.311887  298586 kapi.go:107] duration metric: took 1m31.505053736s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1209 23:18:27.314827  298586 out.go:177] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner, inspektor-gadget, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1209 23:18:27.317437  298586 addons.go:510] duration metric: took 1m39.315557983s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin nvidia-device-plugin storage-provisioner inspektor-gadget ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
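Aside: a minimal sketch of how a subset of the addon list summarized above could be enabled by hand with the minikube CLI; the binary path and profile name are taken from this run, the addon names from the enabled list, and everything else is assumed standard usage rather than taken from this log:

	for addon in ingress ingress-dns metrics-server csi-hostpath-driver; do
	  out/minikube-linux-arm64 -p addons-006125 addons enable "$addon"
	done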
	I1209 23:18:29.094982  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:31.095350  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:33.595150  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:36.095070  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:38.594342  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:40.594691  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:42.595539  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:45.111480  298586 pod_ready.go:103] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"False"
	I1209 23:18:47.094867  298586 pod_ready.go:93] pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace has status "Ready":"True"
	I1209 23:18:47.094894  298586 pod_ready.go:82] duration metric: took 1m38.006928905s for pod "metrics-server-84c5f94fbc-mh6kg" in "kube-system" namespace to be "Ready" ...
	I1209 23:18:47.094908  298586 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-nqsf9" in "kube-system" namespace to be "Ready" ...
	I1209 23:18:47.100404  298586 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-nqsf9" in "kube-system" namespace has status "Ready":"True"
	I1209 23:18:47.100435  298586 pod_ready.go:82] duration metric: took 5.518194ms for pod "nvidia-device-plugin-daemonset-nqsf9" in "kube-system" namespace to be "Ready" ...
	I1209 23:18:47.100458  298586 pod_ready.go:39] duration metric: took 1m40.992639961s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
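Aside: the readiness waits above can be reproduced by hand with kubectl; a minimal sketch, assuming kubectl on PATH and the context name from this run (the selector and timeout mirror one of the label waits in the log):

	kubectl --context addons-006125 -n kube-system \
	  wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m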
	I1209 23:18:47.100473  298586 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:18:47.101048  298586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 23:18:47.101141  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 23:18:47.164828  298586 cri.go:89] found id: "28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4"
	I1209 23:18:47.164859  298586 cri.go:89] found id: ""
	I1209 23:18:47.164867  298586 logs.go:282] 1 containers: [28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4]
	I1209 23:18:47.164946  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:47.169793  298586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 23:18:47.169872  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 23:18:47.225891  298586 cri.go:89] found id: "bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3"
	I1209 23:18:47.225915  298586 cri.go:89] found id: ""
	I1209 23:18:47.225924  298586 logs.go:282] 1 containers: [bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3]
	I1209 23:18:47.226012  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:47.229838  298586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 23:18:47.229956  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 23:18:47.283100  298586 cri.go:89] found id: "60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a"
	I1209 23:18:47.283163  298586 cri.go:89] found id: ""
	I1209 23:18:47.283173  298586 logs.go:282] 1 containers: [60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a]
	I1209 23:18:47.283235  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:47.287525  298586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 23:18:47.287650  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 23:18:47.328787  298586 cri.go:89] found id: "9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55"
	I1209 23:18:47.328809  298586 cri.go:89] found id: ""
	I1209 23:18:47.328817  298586 logs.go:282] 1 containers: [9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55]
	I1209 23:18:47.328878  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:47.332874  298586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 23:18:47.332949  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 23:18:47.378532  298586 cri.go:89] found id: "07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143"
	I1209 23:18:47.378605  298586 cri.go:89] found id: ""
	I1209 23:18:47.378618  298586 logs.go:282] 1 containers: [07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143]
	I1209 23:18:47.378818  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:47.382909  298586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 23:18:47.383028  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 23:18:47.427889  298586 cri.go:89] found id: "ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7"
	I1209 23:18:47.427914  298586 cri.go:89] found id: ""
	I1209 23:18:47.427923  298586 logs.go:282] 1 containers: [ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7]
	I1209 23:18:47.427990  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:47.432484  298586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 23:18:47.432565  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 23:18:47.475022  298586 cri.go:89] found id: "46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c"
	I1209 23:18:47.475046  298586 cri.go:89] found id: ""
	I1209 23:18:47.475061  298586 logs.go:282] 1 containers: [46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c]
	I1209 23:18:47.475175  298586 ssh_runner.go:195] Run: which crictl
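Aside: the block above resolves each component's container ID before tailing its logs; a minimal sketch of the same two-step pattern, assuming it runs on the node (e.g. via minikube ssh) with crictl installed:

	# discover the container ID, then tail its logs (mirrors the Run: lines above)
	id="$(sudo crictl ps -a --quiet --name=kube-apiserver)"
	sudo crictl logs --tail 400 "$id"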
	I1209 23:18:47.481147  298586 logs.go:123] Gathering logs for dmesg ...
	I1209 23:18:47.481223  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 23:18:47.499467  298586 logs.go:123] Gathering logs for etcd [bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3] ...
	I1209 23:18:47.499496  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3"
	I1209 23:18:47.550999  298586 logs.go:123] Gathering logs for coredns [60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a] ...
	I1209 23:18:47.551032  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a"
	I1209 23:18:47.627242  298586 logs.go:123] Gathering logs for kube-proxy [07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143] ...
	I1209 23:18:47.627277  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143"
	I1209 23:18:47.680809  298586 logs.go:123] Gathering logs for kube-controller-manager [ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7] ...
	I1209 23:18:47.680840  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7"
	I1209 23:18:47.752097  298586 logs.go:123] Gathering logs for kindnet [46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c] ...
	I1209 23:18:47.752135  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c"
	I1209 23:18:47.790826  298586 logs.go:123] Gathering logs for kubelet ...
	I1209 23:18:47.790858  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 23:18:47.868101  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:05 addons-006125 kubelet[1514]: W1209 23:17:05.967476    1514 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006125' and this object
	W1209 23:18:47.868380  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:05 addons-006125 kubelet[1514]: E1209 23:17:05.967533    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	W1209 23:18:47.868574  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:05 addons-006125 kubelet[1514]: W1209 23:17:05.967591    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006125' and this object
	W1209 23:18:47.868803  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:05 addons-006125 kubelet[1514]: E1209 23:17:05.967605    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	W1209 23:18:47.870079  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:06 addons-006125 kubelet[1514]: W1209 23:17:06.035381    1514 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-006125' and this object
	W1209 23:18:47.870312  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:06 addons-006125 kubelet[1514]: E1209 23:17:06.035439    1514 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	I1209 23:18:47.909575  298586 logs.go:123] Gathering logs for kube-apiserver [28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4] ...
	I1209 23:18:47.909612  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4"
	I1209 23:18:47.987180  298586 logs.go:123] Gathering logs for kube-scheduler [9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55] ...
	I1209 23:18:47.987300  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55"
	I1209 23:18:48.057088  298586 logs.go:123] Gathering logs for CRI-O ...
	I1209 23:18:48.057140  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 23:18:48.157233  298586 logs.go:123] Gathering logs for container status ...
	I1209 23:18:48.157274  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 23:18:48.212076  298586 logs.go:123] Gathering logs for describe nodes ...
	I1209 23:18:48.212108  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 23:18:48.422529  298586 out.go:358] Setting ErrFile to fd 2...
	I1209 23:18:48.422557  298586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 23:18:48.422609  298586 out.go:270] X Problems detected in kubelet:
	W1209 23:18:48.422628  298586 out.go:270]   Dec 09 23:17:05 addons-006125 kubelet[1514]: E1209 23:17:05.967533    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	W1209 23:18:48.422635  298586 out.go:270]   Dec 09 23:17:05 addons-006125 kubelet[1514]: W1209 23:17:05.967591    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006125' and this object
	W1209 23:18:48.422646  298586 out.go:270]   Dec 09 23:17:05 addons-006125 kubelet[1514]: E1209 23:17:05.967605    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	W1209 23:18:48.422653  298586 out.go:270]   Dec 09 23:17:06 addons-006125 kubelet[1514]: W1209 23:17:06.035381    1514 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-006125' and this object
	W1209 23:18:48.422664  298586 out.go:270]   Dec 09 23:17:06 addons-006125 kubelet[1514]: E1209 23:17:06.035439    1514 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	I1209 23:18:48.422670  298586 out.go:358] Setting ErrFile to fd 2...
	I1209 23:18:48.422679  298586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:18:58.423939  298586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:18:58.438628  298586 api_server.go:72] duration metric: took 2m10.437339017s to wait for apiserver process to appear ...
	I1209 23:18:58.438654  298586 api_server.go:88] waiting for apiserver healthz status ...
	I1209 23:18:58.438688  298586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 23:18:58.438758  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 23:18:58.481201  298586 cri.go:89] found id: "28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4"
	I1209 23:18:58.481226  298586 cri.go:89] found id: ""
	I1209 23:18:58.481235  298586 logs.go:282] 1 containers: [28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4]
	I1209 23:18:58.481292  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:58.484978  298586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 23:18:58.485057  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 23:18:58.522434  298586 cri.go:89] found id: "bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3"
	I1209 23:18:58.522458  298586 cri.go:89] found id: ""
	I1209 23:18:58.522467  298586 logs.go:282] 1 containers: [bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3]
	I1209 23:18:58.522524  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:58.526129  298586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 23:18:58.526203  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 23:18:58.564758  298586 cri.go:89] found id: "60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a"
	I1209 23:18:58.564781  298586 cri.go:89] found id: ""
	I1209 23:18:58.564789  298586 logs.go:282] 1 containers: [60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a]
	I1209 23:18:58.564846  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:58.568394  298586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 23:18:58.568467  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 23:18:58.607651  298586 cri.go:89] found id: "9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55"
	I1209 23:18:58.607679  298586 cri.go:89] found id: ""
	I1209 23:18:58.607693  298586 logs.go:282] 1 containers: [9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55]
	I1209 23:18:58.607754  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:58.611400  298586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 23:18:58.611479  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 23:18:58.651381  298586 cri.go:89] found id: "07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143"
	I1209 23:18:58.651404  298586 cri.go:89] found id: ""
	I1209 23:18:58.651412  298586 logs.go:282] 1 containers: [07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143]
	I1209 23:18:58.651472  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:58.655072  298586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 23:18:58.655174  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 23:18:58.694835  298586 cri.go:89] found id: "ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7"
	I1209 23:18:58.694859  298586 cri.go:89] found id: ""
	I1209 23:18:58.694867  298586 logs.go:282] 1 containers: [ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7]
	I1209 23:18:58.694925  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:58.698523  298586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 23:18:58.698598  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 23:18:58.738842  298586 cri.go:89] found id: "46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c"
	I1209 23:18:58.738867  298586 cri.go:89] found id: ""
	I1209 23:18:58.738875  298586 logs.go:282] 1 containers: [46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c]
	I1209 23:18:58.738931  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:18:58.742664  298586 logs.go:123] Gathering logs for etcd [bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3] ...
	I1209 23:18:58.742699  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3"
	I1209 23:18:58.795871  298586 logs.go:123] Gathering logs for coredns [60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a] ...
	I1209 23:18:58.795904  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a"
	I1209 23:18:58.860291  298586 logs.go:123] Gathering logs for kube-controller-manager [ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7] ...
	I1209 23:18:58.860325  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7"
	I1209 23:18:58.948926  298586 logs.go:123] Gathering logs for kindnet [46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c] ...
	I1209 23:18:58.948965  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c"
	I1209 23:18:58.988605  298586 logs.go:123] Gathering logs for kubelet ...
	I1209 23:18:58.988635  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 23:18:59.056397  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:05 addons-006125 kubelet[1514]: W1209 23:17:05.967476    1514 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006125' and this object
	W1209 23:18:59.056669  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:05 addons-006125 kubelet[1514]: E1209 23:17:05.967533    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	W1209 23:18:59.056860  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:05 addons-006125 kubelet[1514]: W1209 23:17:05.967591    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006125' and this object
	W1209 23:18:59.057091  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:05 addons-006125 kubelet[1514]: E1209 23:17:05.967605    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	W1209 23:18:59.058395  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:06 addons-006125 kubelet[1514]: W1209 23:17:06.035381    1514 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-006125' and this object
	W1209 23:18:59.058611  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:06 addons-006125 kubelet[1514]: E1209 23:17:06.035439    1514 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	I1209 23:18:59.098989  298586 logs.go:123] Gathering logs for dmesg ...
	I1209 23:18:59.099032  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 23:18:59.115930  298586 logs.go:123] Gathering logs for describe nodes ...
	I1209 23:18:59.115962  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 23:18:59.275457  298586 logs.go:123] Gathering logs for kube-apiserver [28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4] ...
	I1209 23:18:59.275489  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4"
	I1209 23:18:59.344235  298586 logs.go:123] Gathering logs for CRI-O ...
	I1209 23:18:59.344277  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 23:18:59.444879  298586 logs.go:123] Gathering logs for kube-scheduler [9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55] ...
	I1209 23:18:59.444919  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55"
	I1209 23:18:59.492387  298586 logs.go:123] Gathering logs for kube-proxy [07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143] ...
	I1209 23:18:59.492424  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143"
	I1209 23:18:59.532088  298586 logs.go:123] Gathering logs for container status ...
	I1209 23:18:59.532121  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 23:18:59.581440  298586 out.go:358] Setting ErrFile to fd 2...
	I1209 23:18:59.581466  298586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 23:18:59.581525  298586 out.go:270] X Problems detected in kubelet:
	W1209 23:18:59.581536  298586 out.go:270]   Dec 09 23:17:05 addons-006125 kubelet[1514]: E1209 23:17:05.967533    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	W1209 23:18:59.581542  298586 out.go:270]   Dec 09 23:17:05 addons-006125 kubelet[1514]: W1209 23:17:05.967591    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006125' and this object
	W1209 23:18:59.581549  298586 out.go:270]   Dec 09 23:17:05 addons-006125 kubelet[1514]: E1209 23:17:05.967605    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	W1209 23:18:59.581557  298586 out.go:270]   Dec 09 23:17:06 addons-006125 kubelet[1514]: W1209 23:17:06.035381    1514 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-006125' and this object
	W1209 23:18:59.581564  298586 out.go:270]   Dec 09 23:17:06 addons-006125 kubelet[1514]: E1209 23:17:06.035439    1514 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	I1209 23:18:59.581577  298586 out.go:358] Setting ErrFile to fd 2...
	I1209 23:18:59.581582  298586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:19:09.582637  298586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1209 23:19:09.591661  298586 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1209 23:19:09.593500  298586 api_server.go:141] control plane version: v1.31.2
	I1209 23:19:09.593529  298586 api_server.go:131] duration metric: took 11.154866294s to wait for apiserver health ...
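Aside: the healthz probe above is a plain HTTPS GET; a minimal by-hand equivalent, using the endpoint from this run and skipping certificate verification with -k (unauthenticated /healthz access is the Kubernetes default via the system:public-info-viewer binding):

	curl -ks https://192.168.49.2:8443/healthz
	# expected body: ok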
	I1209 23:19:09.593538  298586 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 23:19:09.593561  298586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 23:19:09.593628  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 23:19:09.650246  298586 cri.go:89] found id: "28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4"
	I1209 23:19:09.650266  298586 cri.go:89] found id: ""
	I1209 23:19:09.650275  298586 logs.go:282] 1 containers: [28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4]
	I1209 23:19:09.650336  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:19:09.654062  298586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 23:19:09.654193  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 23:19:09.697026  298586 cri.go:89] found id: "bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3"
	I1209 23:19:09.697049  298586 cri.go:89] found id: ""
	I1209 23:19:09.697057  298586 logs.go:282] 1 containers: [bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3]
	I1209 23:19:09.697123  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:19:09.700847  298586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 23:19:09.700924  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 23:19:09.745769  298586 cri.go:89] found id: "60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a"
	I1209 23:19:09.745792  298586 cri.go:89] found id: ""
	I1209 23:19:09.745801  298586 logs.go:282] 1 containers: [60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a]
	I1209 23:19:09.745870  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:19:09.749699  298586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 23:19:09.749776  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 23:19:09.788613  298586 cri.go:89] found id: "9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55"
	I1209 23:19:09.788640  298586 cri.go:89] found id: ""
	I1209 23:19:09.788649  298586 logs.go:282] 1 containers: [9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55]
	I1209 23:19:09.788714  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:19:09.792628  298586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 23:19:09.792714  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 23:19:09.838079  298586 cri.go:89] found id: "07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143"
	I1209 23:19:09.838100  298586 cri.go:89] found id: ""
	I1209 23:19:09.838109  298586 logs.go:282] 1 containers: [07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143]
	I1209 23:19:09.838171  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:19:09.842101  298586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 23:19:09.842230  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 23:19:09.885915  298586 cri.go:89] found id: "ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7"
	I1209 23:19:09.885936  298586 cri.go:89] found id: ""
	I1209 23:19:09.885945  298586 logs.go:282] 1 containers: [ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7]
	I1209 23:19:09.886005  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:19:09.889892  298586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 23:19:09.889963  298586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 23:19:09.930139  298586 cri.go:89] found id: "46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c"
	I1209 23:19:09.930159  298586 cri.go:89] found id: ""
	I1209 23:19:09.930167  298586 logs.go:282] 1 containers: [46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c]
	I1209 23:19:09.930224  298586 ssh_runner.go:195] Run: which crictl
	I1209 23:19:09.933971  298586 logs.go:123] Gathering logs for kubelet ...
	I1209 23:19:09.933995  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 23:19:10.006231  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:05 addons-006125 kubelet[1514]: W1209 23:17:05.967476    1514 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006125' and this object
	W1209 23:19:10.006481  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:05 addons-006125 kubelet[1514]: E1209 23:17:05.967533    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	W1209 23:19:10.006665  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:05 addons-006125 kubelet[1514]: W1209 23:17:05.967591    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006125' and this object
	W1209 23:19:10.006890  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:05 addons-006125 kubelet[1514]: E1209 23:17:05.967605    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	W1209 23:19:10.008211  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:06 addons-006125 kubelet[1514]: W1209 23:17:06.035381    1514 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-006125' and this object
	W1209 23:19:10.008421  298586 logs.go:138] Found kubelet problem: Dec 09 23:17:06 addons-006125 kubelet[1514]: E1209 23:17:06.035439    1514 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	I1209 23:19:10.048853  298586 logs.go:123] Gathering logs for describe nodes ...
	I1209 23:19:10.048893  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 23:19:10.202155  298586 logs.go:123] Gathering logs for kube-proxy [07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143] ...
	I1209 23:19:10.202186  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143"
	I1209 23:19:10.246024  298586 logs.go:123] Gathering logs for kube-controller-manager [ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7] ...
	I1209 23:19:10.246071  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7"
	I1209 23:19:10.315867  298586 logs.go:123] Gathering logs for container status ...
	I1209 23:19:10.315906  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 23:19:10.374661  298586 logs.go:123] Gathering logs for kindnet [46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c] ...
	I1209 23:19:10.374699  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c"
	I1209 23:19:10.419539  298586 logs.go:123] Gathering logs for CRI-O ...
	I1209 23:19:10.419576  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 23:19:10.521679  298586 logs.go:123] Gathering logs for dmesg ...
	I1209 23:19:10.521728  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 23:19:10.542385  298586 logs.go:123] Gathering logs for kube-apiserver [28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4] ...
	I1209 23:19:10.542419  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4"
	I1209 23:19:10.622348  298586 logs.go:123] Gathering logs for etcd [bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3] ...
	I1209 23:19:10.622386  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3"
	I1209 23:19:10.671460  298586 logs.go:123] Gathering logs for coredns [60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a] ...
	I1209 23:19:10.671498  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a"
	I1209 23:19:10.736896  298586 logs.go:123] Gathering logs for kube-scheduler [9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55] ...
	I1209 23:19:10.736936  298586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55"
	I1209 23:19:10.811487  298586 out.go:358] Setting ErrFile to fd 2...
	I1209 23:19:10.811520  298586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 23:19:10.811602  298586 out.go:270] X Problems detected in kubelet:
	W1209 23:19:10.811617  298586 out.go:270]   Dec 09 23:17:05 addons-006125 kubelet[1514]: E1209 23:17:05.967533    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	W1209 23:19:10.811743  298586 out.go:270]   Dec 09 23:17:05 addons-006125 kubelet[1514]: W1209 23:17:05.967591    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006125' and this object
	W1209 23:19:10.811760  298586 out.go:270]   Dec 09 23:17:05 addons-006125 kubelet[1514]: E1209 23:17:05.967605    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	W1209 23:19:10.811767  298586 out.go:270]   Dec 09 23:17:06 addons-006125 kubelet[1514]: W1209 23:17:06.035381    1514 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-006125" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-006125' and this object
	W1209 23:19:10.811796  298586 out.go:270]   Dec 09 23:17:06 addons-006125 kubelet[1514]: E1209 23:17:06.035439    1514 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-006125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-006125' and this object" logger="UnhandledError"
	I1209 23:19:10.811803  298586 out.go:358] Setting ErrFile to fd 2...
	I1209 23:19:10.811815  298586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:19:20.823707  298586 system_pods.go:59] 18 kube-system pods found
	I1209 23:19:20.823748  298586 system_pods.go:61] "coredns-7c65d6cfc9-ps5kv" [ac846172-1271-42aa-9357-9fe66f96d82e] Running
	I1209 23:19:20.823755  298586 system_pods.go:61] "csi-hostpath-attacher-0" [70f07398-e2e6-43c0-8bac-4a6293846c83] Running
	I1209 23:19:20.823760  298586 system_pods.go:61] "csi-hostpath-resizer-0" [34a1d7a1-e66c-4126-8d08-31dd378e5ea6] Running
	I1209 23:19:20.823764  298586 system_pods.go:61] "csi-hostpathplugin-2lbjj" [7f67705b-ffe1-4e1b-a0f2-8bab862d22e9] Running
	I1209 23:19:20.823768  298586 system_pods.go:61] "etcd-addons-006125" [a56e9231-50e7-4e8d-a3c4-93ed5da068e1] Running
	I1209 23:19:20.823772  298586 system_pods.go:61] "kindnet-pshzw" [39fdd361-24a9-4d74-b04d-46b9b70eca6b] Running
	I1209 23:19:20.823799  298586 system_pods.go:61] "kube-apiserver-addons-006125" [df184bbd-bc28-424b-ba7a-bafa22eb9cfc] Running
	I1209 23:19:20.823811  298586 system_pods.go:61] "kube-controller-manager-addons-006125" [1bdae08d-3326-47dc-b837-2887ecec58fa] Running
	I1209 23:19:20.823815  298586 system_pods.go:61] "kube-ingress-dns-minikube" [bfdbb59b-5556-4a7d-87bc-bdfcb11a73cf] Running
	I1209 23:19:20.823821  298586 system_pods.go:61] "kube-proxy-sp7fm" [5e14145c-69bb-4925-9fc5-5222465c4f5c] Running
	I1209 23:19:20.823827  298586 system_pods.go:61] "kube-scheduler-addons-006125" [02d6d85b-621d-44f8-9ab2-7937ef0626bb] Running
	I1209 23:19:20.823832  298586 system_pods.go:61] "metrics-server-84c5f94fbc-mh6kg" [028d5ed7-2cbe-4a41-9585-89a1da10129a] Running
	I1209 23:19:20.823839  298586 system_pods.go:61] "nvidia-device-plugin-daemonset-nqsf9" [ae3a9e66-1569-459a-8a4c-25e166bd28a9] Running
	I1209 23:19:20.823843  298586 system_pods.go:61] "registry-5cc95cd69-s95j5" [0e371bb5-f973-4496-b0af-810240c01f88] Running
	I1209 23:19:20.823846  298586 system_pods.go:61] "registry-proxy-m54xt" [c65f40cc-4e12-46bd-a8c7-12d30baa522c] Running
	I1209 23:19:20.823851  298586 system_pods.go:61] "snapshot-controller-56fcc65765-8jbrz" [97fd1ee6-328a-437b-9179-923246db9b8c] Running
	I1209 23:19:20.823875  298586 system_pods.go:61] "snapshot-controller-56fcc65765-vkc6z" [157f0d6a-9131-4c4a-a3a7-af4d21263013] Running
	I1209 23:19:20.823887  298586 system_pods.go:61] "storage-provisioner" [169f26d0-1747-4bb8-90ce-17759ea05d6b] Running
	I1209 23:19:20.823894  298586 system_pods.go:74] duration metric: took 11.23034998s to wait for pod list to return data ...
	I1209 23:19:20.823907  298586 default_sa.go:34] waiting for default service account to be created ...
	I1209 23:19:20.826774  298586 default_sa.go:45] found service account: "default"
	I1209 23:19:20.826802  298586 default_sa.go:55] duration metric: took 2.88837ms for default service account to be created ...
	I1209 23:19:20.826812  298586 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 23:19:20.837425  298586 system_pods.go:86] 18 kube-system pods found
	I1209 23:19:20.837461  298586 system_pods.go:89] "coredns-7c65d6cfc9-ps5kv" [ac846172-1271-42aa-9357-9fe66f96d82e] Running
	I1209 23:19:20.837470  298586 system_pods.go:89] "csi-hostpath-attacher-0" [70f07398-e2e6-43c0-8bac-4a6293846c83] Running
	I1209 23:19:20.837483  298586 system_pods.go:89] "csi-hostpath-resizer-0" [34a1d7a1-e66c-4126-8d08-31dd378e5ea6] Running
	I1209 23:19:20.837508  298586 system_pods.go:89] "csi-hostpathplugin-2lbjj" [7f67705b-ffe1-4e1b-a0f2-8bab862d22e9] Running
	I1209 23:19:20.837522  298586 system_pods.go:89] "etcd-addons-006125" [a56e9231-50e7-4e8d-a3c4-93ed5da068e1] Running
	I1209 23:19:20.837528  298586 system_pods.go:89] "kindnet-pshzw" [39fdd361-24a9-4d74-b04d-46b9b70eca6b] Running
	I1209 23:19:20.837533  298586 system_pods.go:89] "kube-apiserver-addons-006125" [df184bbd-bc28-424b-ba7a-bafa22eb9cfc] Running
	I1209 23:19:20.837541  298586 system_pods.go:89] "kube-controller-manager-addons-006125" [1bdae08d-3326-47dc-b837-2887ecec58fa] Running
	I1209 23:19:20.837549  298586 system_pods.go:89] "kube-ingress-dns-minikube" [bfdbb59b-5556-4a7d-87bc-bdfcb11a73cf] Running
	I1209 23:19:20.837554  298586 system_pods.go:89] "kube-proxy-sp7fm" [5e14145c-69bb-4925-9fc5-5222465c4f5c] Running
	I1209 23:19:20.837559  298586 system_pods.go:89] "kube-scheduler-addons-006125" [02d6d85b-621d-44f8-9ab2-7937ef0626bb] Running
	I1209 23:19:20.837582  298586 system_pods.go:89] "metrics-server-84c5f94fbc-mh6kg" [028d5ed7-2cbe-4a41-9585-89a1da10129a] Running
	I1209 23:19:20.837595  298586 system_pods.go:89] "nvidia-device-plugin-daemonset-nqsf9" [ae3a9e66-1569-459a-8a4c-25e166bd28a9] Running
	I1209 23:19:20.837613  298586 system_pods.go:89] "registry-5cc95cd69-s95j5" [0e371bb5-f973-4496-b0af-810240c01f88] Running
	I1209 23:19:20.837623  298586 system_pods.go:89] "registry-proxy-m54xt" [c65f40cc-4e12-46bd-a8c7-12d30baa522c] Running
	I1209 23:19:20.837627  298586 system_pods.go:89] "snapshot-controller-56fcc65765-8jbrz" [97fd1ee6-328a-437b-9179-923246db9b8c] Running
	I1209 23:19:20.837631  298586 system_pods.go:89] "snapshot-controller-56fcc65765-vkc6z" [157f0d6a-9131-4c4a-a3a7-af4d21263013] Running
	I1209 23:19:20.837639  298586 system_pods.go:89] "storage-provisioner" [169f26d0-1747-4bb8-90ce-17759ea05d6b] Running
	I1209 23:19:20.837646  298586 system_pods.go:126] duration metric: took 10.827039ms to wait for k8s-apps to be running ...
	I1209 23:19:20.837660  298586 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 23:19:20.837734  298586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 23:19:20.850687  298586 system_svc.go:56] duration metric: took 13.018367ms WaitForService to wait for kubelet
	I1209 23:19:20.850720  298586 kubeadm.go:582] duration metric: took 2m32.849436375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
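Aside: the kubelet service probe above boils down to systemd's is-active check; a minimal sketch, assumed to run inside the node (e.g. via minikube -p addons-006125 ssh):

	sudo systemctl is-active --quiet kubelet && echo "kubelet running"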
	I1209 23:19:20.850740  298586 node_conditions.go:102] verifying NodePressure condition ...
	I1209 23:19:20.855500  298586 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1209 23:19:20.855547  298586 node_conditions.go:123] node cpu capacity is 2
	I1209 23:19:20.855570  298586 node_conditions.go:105] duration metric: took 4.823482ms to run NodePressure ...
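Aside: the capacity figures above are read from the node object; a minimal sketch of inspecting them with kubectl (context and node name from this run, grep pattern assumed):

	kubectl --context addons-006125 describe node addons-006125 \
	  | grep -E 'cpu:|ephemeral-storage:'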
	I1209 23:19:20.855584  298586 start.go:241] waiting for startup goroutines ...
	I1209 23:19:20.855592  298586 start.go:246] waiting for cluster config update ...
	I1209 23:19:20.855614  298586 start.go:255] writing updated cluster config ...
	I1209 23:19:20.855923  298586 ssh_runner.go:195] Run: rm -f paused
	I1209 23:19:21.235289  298586 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 23:19:21.237691  298586 out.go:177] * Done! kubectl is now configured to use "addons-006125" cluster and "default" namespace by default
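Aside: once "Done!" is printed the kubeconfig context is set as stated; a minimal smoke test, assuming kubectl on PATH:

	kubectl --context addons-006125 get pods -A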
	
	
	==> CRI-O <==
	Dec 09 23:23:58 addons-006125 crio[961]: time="2024-12-09 23:23:58.123060559Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-5f85ff4588-7wrst Namespace:ingress-nginx ID:35b55c586f0532f82d9ea1d062ade531f9774904e0820bca01e8c794edd54a6c UID:a7d66ae5-fce2-4a9f-a0a2-c1e905897db9 NetNS:/var/run/netns/5db7ead6-1d71-43cb-b1cf-b59d88f8ad44 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 09 23:23:58 addons-006125 crio[961]: time="2024-12-09 23:23:58.123237004Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-5f85ff4588-7wrst from CNI network \"kindnet\" (type=ptp)"
	Dec 09 23:23:58 addons-006125 crio[961]: time="2024-12-09 23:23:58.155481454Z" level=info msg="Stopped pod sandbox: 35b55c586f0532f82d9ea1d062ade531f9774904e0820bca01e8c794edd54a6c" id=2b0dcb07-81e8-458b-b2a5-12971ca13ec8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 23:23:58 addons-006125 crio[961]: time="2024-12-09 23:23:58.283085749Z" level=info msg="Removing container: 61562f3c7ba8cbb2298ba4431eb904d251c6ff796f2c383e62148695d836d8a9" id=6a62108b-a49c-42b8-8ea6-72cd47128a64 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 23:23:58 addons-006125 crio[961]: time="2024-12-09 23:23:58.302328556Z" level=info msg="Removed container 61562f3c7ba8cbb2298ba4431eb904d251c6ff796f2c383e62148695d836d8a9: ingress-nginx/ingress-nginx-controller-5f85ff4588-7wrst/controller" id=6a62108b-a49c-42b8-8ea6-72cd47128a64 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 23:24:43 addons-006125 crio[961]: time="2024-12-09 23:24:43.710567045Z" level=info msg="Removing container: a81f2583f2542971631037d0cc2f8a566916475d4bcc75e799d93f7566ba2ea5" id=7153486d-ba7b-43d3-b242-c93d7e7b5f9b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 23:24:43 addons-006125 crio[961]: time="2024-12-09 23:24:43.737768737Z" level=info msg="Removed container a81f2583f2542971631037d0cc2f8a566916475d4bcc75e799d93f7566ba2ea5: ingress-nginx/ingress-nginx-admission-patch-xmxnz/patch" id=7153486d-ba7b-43d3-b242-c93d7e7b5f9b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 23:24:43 addons-006125 crio[961]: time="2024-12-09 23:24:43.739280106Z" level=info msg="Removing container: 8261124c915515f1f3a9908af08ca8caf794b4948ebeadd0f8bd1a1d3eed2cab" id=652a1b67-546c-4336-b129-67fc35dc069e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 23:24:43 addons-006125 crio[961]: time="2024-12-09 23:24:43.758480139Z" level=info msg="Removed container 8261124c915515f1f3a9908af08ca8caf794b4948ebeadd0f8bd1a1d3eed2cab: ingress-nginx/ingress-nginx-admission-create-ss8p4/create" id=652a1b67-546c-4336-b129-67fc35dc069e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 23:24:43 addons-006125 crio[961]: time="2024-12-09 23:24:43.759937091Z" level=info msg="Stopping pod sandbox: 35b55c586f0532f82d9ea1d062ade531f9774904e0820bca01e8c794edd54a6c" id=8f4a08c6-bf7a-4f55-8c64-5039d944609c name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 23:24:43 addons-006125 crio[961]: time="2024-12-09 23:24:43.759977584Z" level=info msg="Stopped pod sandbox (already stopped): 35b55c586f0532f82d9ea1d062ade531f9774904e0820bca01e8c794edd54a6c" id=8f4a08c6-bf7a-4f55-8c64-5039d944609c name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 23:24:43 addons-006125 crio[961]: time="2024-12-09 23:24:43.760476256Z" level=info msg="Removing pod sandbox: 35b55c586f0532f82d9ea1d062ade531f9774904e0820bca01e8c794edd54a6c" id=6cc856f0-a1f1-44b3-ab59-5ec051bc6418 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 09 23:24:43 addons-006125 crio[961]: time="2024-12-09 23:24:43.769251447Z" level=info msg="Removed pod sandbox: 35b55c586f0532f82d9ea1d062ade531f9774904e0820bca01e8c794edd54a6c" id=6cc856f0-a1f1-44b3-ab59-5ec051bc6418 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 09 23:24:43 addons-006125 crio[961]: time="2024-12-09 23:24:43.769772134Z" level=info msg="Stopping pod sandbox: 425a6b62f351acf7f368e3556bf013bbbbad32160d17b156f46c75921b71bf6b" id=f773a1b8-77cb-4491-9b94-af6395a4654c name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 23:24:43 addons-006125 crio[961]: time="2024-12-09 23:24:43.769808097Z" level=info msg="Stopped pod sandbox (already stopped): 425a6b62f351acf7f368e3556bf013bbbbad32160d17b156f46c75921b71bf6b" id=f773a1b8-77cb-4491-9b94-af6395a4654c name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 23:24:43 addons-006125 crio[961]: time="2024-12-09 23:24:43.770174698Z" level=info msg="Removing pod sandbox: 425a6b62f351acf7f368e3556bf013bbbbad32160d17b156f46c75921b71bf6b" id=63ea5931-7f0e-43f2-aff9-1f48b0f314a4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 09 23:24:43 addons-006125 crio[961]: time="2024-12-09 23:24:43.780313630Z" level=info msg="Removed pod sandbox: 425a6b62f351acf7f368e3556bf013bbbbad32160d17b156f46c75921b71bf6b" id=63ea5931-7f0e-43f2-aff9-1f48b0f314a4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 09 23:24:43 addons-006125 crio[961]: time="2024-12-09 23:24:43.780917001Z" level=info msg="Stopping pod sandbox: ac9f416aea69178aa7d785a614ed08efaa31d7fc6c19e849823d4eb26f015f68" id=5d628bb2-ee31-4ea1-b854-8c4a26ff6f2f name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 23:24:43 addons-006125 crio[961]: time="2024-12-09 23:24:43.780954638Z" level=info msg="Stopped pod sandbox (already stopped): ac9f416aea69178aa7d785a614ed08efaa31d7fc6c19e849823d4eb26f015f68" id=5d628bb2-ee31-4ea1-b854-8c4a26ff6f2f name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 23:24:43 addons-006125 crio[961]: time="2024-12-09 23:24:43.781299159Z" level=info msg="Removing pod sandbox: ac9f416aea69178aa7d785a614ed08efaa31d7fc6c19e849823d4eb26f015f68" id=0f472f27-5dcc-43be-b965-feecfded5129 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 09 23:24:43 addons-006125 crio[961]: time="2024-12-09 23:24:43.791057023Z" level=info msg="Removed pod sandbox: ac9f416aea69178aa7d785a614ed08efaa31d7fc6c19e849823d4eb26f015f68" id=0f472f27-5dcc-43be-b965-feecfded5129 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 09 23:24:43 addons-006125 crio[961]: time="2024-12-09 23:24:43.791771313Z" level=info msg="Stopping pod sandbox: 181af42509b63edfbffaa7ec10b490a941438bcef800bcf677e27c4e76d33f94" id=0153cc7b-8c55-462c-9978-7cb22af328f4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 23:24:43 addons-006125 crio[961]: time="2024-12-09 23:24:43.791810378Z" level=info msg="Stopped pod sandbox (already stopped): 181af42509b63edfbffaa7ec10b490a941438bcef800bcf677e27c4e76d33f94" id=0153cc7b-8c55-462c-9978-7cb22af328f4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 23:24:43 addons-006125 crio[961]: time="2024-12-09 23:24:43.792216241Z" level=info msg="Removing pod sandbox: 181af42509b63edfbffaa7ec10b490a941438bcef800bcf677e27c4e76d33f94" id=76356b3b-05d3-4191-87f6-d2f8e88dc628 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 09 23:24:43 addons-006125 crio[961]: time="2024-12-09 23:24:43.803732033Z" level=info msg="Removed pod sandbox: 181af42509b63edfbffaa7ec10b490a941438bcef800bcf677e27c4e76d33f94" id=76356b3b-05d3-4191-87f6-d2f8e88dc628 name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	864e68f55a0c6       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   About a minute ago   Running             hello-world-app           0                   25fedac5b784d       hello-world-app-55bf9c44b4-crwqk
	83be2b3d47e63       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                         4 minutes ago        Running             nginx                     0                   6ce328d60f74b       nginx
	c613f71753421       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago        Running             busybox                   0                   f251b28a646eb       busybox
	8191bfee51953       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f   8 minutes ago        Running             metrics-server            0                   441138a11451a       metrics-server-84c5f94fbc-mh6kg
	60f210579139f       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                        8 minutes ago        Running             coredns                   0                   8f608a61a5b01       coredns-7c65d6cfc9-ps5kv
	83cc359be952d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        8 minutes ago        Running             storage-provisioner       0                   fb9951ed37ad7       storage-provisioner
	46d2df09c6c2d       docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e                      8 minutes ago        Running             kindnet-cni               0                   acba07dda4adc       kindnet-pshzw
	07527f58e4332       021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba                                                        8 minutes ago        Running             kube-proxy                0                   577c7b2136793       kube-proxy-sp7fm
	28972e4f2344f       f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270                                                        9 minutes ago        Running             kube-apiserver            0                   e36e3239239f7       kube-apiserver-addons-006125
	9591912eda249       d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a                                                        9 minutes ago        Running             kube-scheduler            0                   f9945e404cb30       kube-scheduler-addons-006125
	ebbf87552a8cf       9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba                                                        9 minutes ago        Running             kube-controller-manager   0                   e96afb4daa5ed       kube-controller-manager-addons-006125
	bf6e7cfb9e6ee       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        9 minutes ago        Running             etcd                      0                   c79995c7d2ab0       etcd-addons-006125
	
	
	==> coredns [60f210579139fb360942f30a6f0044c6c2adf61c617844e87e828739405e7a0a] <==
	[INFO] 10.244.0.21:53860 - 49200 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000172416s
	[INFO] 10.244.0.21:53860 - 4540 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000470414s
	[INFO] 10.244.0.21:53860 - 19020 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000088731s
	[INFO] 10.244.0.21:53860 - 2248 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000044965s
	[INFO] 10.244.0.21:53860 - 3599 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00140096s
	[INFO] 10.244.0.21:53860 - 54338 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00173483s
	[INFO] 10.244.0.21:53860 - 25680 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000054705s
	[INFO] 10.244.0.21:59364 - 12589 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000122619s
	[INFO] 10.244.0.21:53418 - 45442 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000240954s
	[INFO] 10.244.0.21:53418 - 16713 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000070582s
	[INFO] 10.244.0.21:59364 - 27724 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000048698s
	[INFO] 10.244.0.21:53418 - 26937 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035397s
	[INFO] 10.244.0.21:53418 - 39020 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000097273s
	[INFO] 10.244.0.21:59364 - 45536 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000039279s
	[INFO] 10.244.0.21:53418 - 37763 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000068275s
	[INFO] 10.244.0.21:59364 - 46234 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040771s
	[INFO] 10.244.0.21:53418 - 32326 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038392s
	[INFO] 10.244.0.21:59364 - 41402 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048271s
	[INFO] 10.244.0.21:59364 - 40069 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040345s
	[INFO] 10.244.0.21:53418 - 7992 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004281189s
	[INFO] 10.244.0.21:59364 - 22987 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003231553s
	[INFO] 10.244.0.21:53418 - 61200 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001423967s
	[INFO] 10.244.0.21:59364 - 6065 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001761539s
	[INFO] 10.244.0.21:53418 - 13946 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000085564s
	[INFO] 10.244.0.21:59364 - 31701 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000139349s
	
	
	==> describe nodes <==
	Name:               addons-006125
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-006125
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=addons-006125
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T23_16_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-006125
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 23:16:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-006125
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 23:25:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 23:24:21 +0000   Mon, 09 Dec 2024 23:16:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 23:24:21 +0000   Mon, 09 Dec 2024 23:16:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 23:24:21 +0000   Mon, 09 Dec 2024 23:16:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 23:24:21 +0000   Mon, 09 Dec 2024 23:17:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-006125
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f86b1de6b3e42aca6bb81d86a17348e
	  System UUID:                27619cfe-0879-4c6d-8dce-4580b148df40
	  Boot ID:                    50e9d5fe-ba16-4119-8482-ef38225f12b8
	  Kernel Version:             5.15.0-1072-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	  default                     hello-world-app-55bf9c44b4-crwqk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 coredns-7c65d6cfc9-ps5kv                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m51s
	  kube-system                 etcd-addons-006125                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m58s
	  kube-system                 kindnet-pshzw                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m52s
	  kube-system                 kube-apiserver-addons-006125             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m58s
	  kube-system                 kube-controller-manager-addons-006125    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m59s
	  kube-system                 kube-proxy-sp7fm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m52s
	  kube-system                 kube-scheduler-addons-006125             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m59s
	  kube-system                 metrics-server-84c5f94fbc-mh6kg          100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         8m48s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 8m46s  kube-proxy       
	  Normal   Starting                 8m58s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m58s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8m58s  kubelet          Node addons-006125 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m58s  kubelet          Node addons-006125 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m58s  kubelet          Node addons-006125 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m54s  node-controller  Node addons-006125 event: Registered Node addons-006125 in Controller
	  Normal   NodeReady                8m36s  kubelet          Node addons-006125 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 9 21:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014264] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.469192] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028174] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.034435] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.016734] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.679647] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.654401] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 9 22:21] hrtimer: interrupt took 5553077 ns
	[Dec 9 22:45] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [bf6e7cfb9e6ee2fb864ff818f106db453fa4b47a341711d9d9c56e57ce93bce3] <==
	{"level":"info","ts":"2024-12-09T23:16:36.724006Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-09T23:16:36.724532Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-09T23:16:36.725947Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-09T23:16:36.731127Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-09T23:16:36.731174Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-09T23:16:36.731254Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T23:16:36.731332Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T23:16:36.731363Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T23:16:36.731999Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-09T23:16:36.732856Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-09T23:16:36.807366Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-12-09T23:16:48.796525Z","caller":"traceutil/trace.go:171","msg":"trace[299301096] transaction","detail":"{read_only:false; response_revision:311; number_of_response:1; }","duration":"192.07546ms","start":"2024-12-09T23:16:48.604286Z","end":"2024-12-09T23:16:48.796361Z","steps":["trace[299301096] 'process raft request'  (duration: 124.270688ms)","trace[299301096] 'compare'  (duration: 35.029267ms)","trace[299301096] 'attach lease to kv pair' {req_type:put; key:/registry/minions/addons-006125; req_size:5728; } (duration: 32.631594ms)"],"step_count":3}
	{"level":"info","ts":"2024-12-09T23:16:48.882475Z","caller":"traceutil/trace.go:171","msg":"trace[815781161] linearizableReadLoop","detail":"{readStateIndex:322; appliedIndex:320; }","duration":"225.373031ms","start":"2024-12-09T23:16:48.657091Z","end":"2024-12-09T23:16:48.882464Z","steps":["trace[815781161] 'read index received'  (duration: 71.71146ms)","trace[815781161] 'applied index is now lower than readState.Index'  (duration: 153.660964ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T23:16:48.882624Z","caller":"traceutil/trace.go:171","msg":"trace[2077674660] transaction","detail":"{read_only:false; response_revision:312; number_of_response:1; }","duration":"251.955611ms","start":"2024-12-09T23:16:48.630658Z","end":"2024-12-09T23:16:48.882613Z","steps":["trace[2077674660] 'process raft request'  (duration: 251.71738ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:16:48.882738Z","caller":"traceutil/trace.go:171","msg":"trace[273352616] transaction","detail":"{read_only:false; response_revision:313; number_of_response:1; }","duration":"225.586104ms","start":"2024-12-09T23:16:48.657145Z","end":"2024-12-09T23:16:48.882731Z","steps":["trace[273352616] 'process raft request'  (duration: 225.297493ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:16:48.882861Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.755748ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-12-09T23:16:48.882902Z","caller":"traceutil/trace.go:171","msg":"trace[188832223] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:313; }","duration":"225.80753ms","start":"2024-12-09T23:16:48.657086Z","end":"2024-12-09T23:16:48.882894Z","steps":["trace[188832223] 'agreement among raft nodes before linearized reading'  (duration: 225.715845ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:16:48.943514Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.352967ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2024-12-09T23:16:48.943577Z","caller":"traceutil/trace.go:171","msg":"trace[2047370642] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:315; }","duration":"147.425355ms","start":"2024-12-09T23:16:48.796140Z","end":"2024-12-09T23:16:48.943565Z","steps":["trace[2047370642] 'agreement among raft nodes before linearized reading'  (duration: 147.312886ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:16:48.943846Z","caller":"traceutil/trace.go:171","msg":"trace[549763088] transaction","detail":"{read_only:false; response_revision:314; number_of_response:1; }","duration":"147.654207ms","start":"2024-12-09T23:16:48.796182Z","end":"2024-12-09T23:16:48.943837Z","steps":["trace[549763088] 'process raft request'  (duration: 134.700603ms)","trace[549763088] 'compare'  (duration: 12.45608ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T23:16:48.943951Z","caller":"traceutil/trace.go:171","msg":"trace[2056228274] transaction","detail":"{read_only:false; response_revision:315; number_of_response:1; }","duration":"147.665505ms","start":"2024-12-09T23:16:48.796278Z","end":"2024-12-09T23:16:48.943944Z","steps":["trace[2056228274] 'process raft request'  (duration: 147.142102ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:16:51.951702Z","caller":"traceutil/trace.go:171","msg":"trace[1939232064] transaction","detail":"{read_only:false; response_revision:346; number_of_response:1; }","duration":"181.261738ms","start":"2024-12-09T23:16:51.770423Z","end":"2024-12-09T23:16:51.951685Z","steps":["trace[1939232064] 'process raft request'  (duration: 158.980378ms)","trace[1939232064] 'compare'  (duration: 21.888592ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T23:16:51.952073Z","caller":"traceutil/trace.go:171","msg":"trace[1404595988] transaction","detail":"{read_only:false; response_revision:347; number_of_response:1; }","duration":"165.556225ms","start":"2024-12-09T23:16:51.786509Z","end":"2024-12-09T23:16:51.952065Z","steps":["trace[1404595988] 'process raft request'  (duration: 164.88073ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:16:51.952459Z","caller":"traceutil/trace.go:171","msg":"trace[562913129] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"165.761799ms","start":"2024-12-09T23:16:51.786688Z","end":"2024-12-09T23:16:51.952450Z","steps":["trace[562913129] 'process raft request'  (duration: 164.951303ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:16:51.952610Z","caller":"traceutil/trace.go:171","msg":"trace[1796416080] transaction","detail":"{read_only:false; response_revision:349; number_of_response:1; }","duration":"165.529484ms","start":"2024-12-09T23:16:51.787075Z","end":"2024-12-09T23:16:51.952604Z","steps":["trace[1796416080] 'process raft request'  (duration: 164.946281ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:25:41 up  2:08,  0 users,  load average: 0.40, 1.22, 1.99
	Linux addons-006125 5.15.0-1072-aws #78~20.04.1-Ubuntu SMP Wed Oct 9 15:29:54 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [46d2df09c6c2d2f01b9dc1880e9c995d56d62694fcd3d43d232c9f51d1ca8b6c] <==
	I1209 23:23:35.460467       1 main.go:301] handling current node
	I1209 23:23:45.467728       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:23:45.467766       1 main.go:301] handling current node
	I1209 23:23:55.461208       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:23:55.461328       1 main.go:301] handling current node
	I1209 23:24:05.463447       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:24:05.463659       1 main.go:301] handling current node
	I1209 23:24:15.465582       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:24:15.465616       1 main.go:301] handling current node
	I1209 23:24:25.460413       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:24:25.460474       1 main.go:301] handling current node
	I1209 23:24:35.464382       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:24:35.464515       1 main.go:301] handling current node
	I1209 23:24:45.463606       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:24:45.463738       1 main.go:301] handling current node
	I1209 23:24:55.460477       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:24:55.460510       1 main.go:301] handling current node
	I1209 23:25:05.460412       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:25:05.460532       1 main.go:301] handling current node
	I1209 23:25:15.460412       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:25:15.460533       1 main.go:301] handling current node
	I1209 23:25:25.460415       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:25:25.460610       1 main.go:301] handling current node
	I1209 23:25:35.460989       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:25:35.461071       1 main.go:301] handling current node
	
	
	==> kube-apiserver [28972e4f2344f1922643dc402385704214e88f846cefbce364db88706b9345c4] <==
	I1209 23:20:07.594201       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.155.147"}
	E1209 23:20:10.646387       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1209 23:20:10.663522       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1209 23:20:10.700509       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1209 23:20:25.677190       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1209 23:20:54.968554       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1209 23:21:09.089549       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:21:09.089706       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 23:21:09.116632       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:21:09.116686       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 23:21:09.153939       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:21:09.154095       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 23:21:09.197657       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:21:09.197774       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 23:21:09.230308       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:21:09.230451       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1209 23:21:10.198444       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1209 23:21:10.230566       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1209 23:21:10.288616       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1209 23:21:22.848323       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1209 23:21:23.888376       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1209 23:21:28.515637       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1209 23:21:28.849350       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.48.252"}
	I1209 23:23:49.984903       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.244.252"}
	E1209 23:23:55.040971       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [ebbf87552a8cfd170a3f8cba836837c2b489539f76cbd2b02a04ac3c6e0607c7] <==
	I1209 23:23:52.294571       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="8.857768ms"
	I1209 23:23:52.295594       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="56.32µs"
	I1209 23:23:54.934343       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I1209 23:23:54.940819       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5f85ff4588" duration="7.943µs"
	I1209 23:23:54.945948       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W1209 23:23:55.637303       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:23:55.637352       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1209 23:24:05.241556       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	I1209 23:24:21.708394       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-006125"
	W1209 23:24:22.917820       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:24:22.917861       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:24:25.290029       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:24:25.290068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:24:32.705893       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:24:32.705936       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:24:41.674820       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:24:41.674950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:25:05.951468       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:25:05.951621       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:25:08.731462       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:25:08.731505       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:25:12.543654       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:25:12.543777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:25:20.443069       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:25:20.443146       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [07527f58e4332815841f89503806bdccb0e9f16db6618f0a47da4a02a53c6143] <==
	I1209 23:16:52.879951       1 server_linux.go:66] "Using iptables proxy"
	I1209 23:16:53.896500       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1209 23:16:53.981292       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 23:16:54.878461       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 23:16:54.878600       1 server_linux.go:169] "Using iptables Proxier"
	I1209 23:16:54.881840       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 23:16:54.882677       1 server.go:483] "Version info" version="v1.31.2"
	I1209 23:16:54.882759       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 23:16:54.885691       1 config.go:199] "Starting service config controller"
	I1209 23:16:54.885779       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 23:16:54.885811       1 config.go:105] "Starting endpoint slice config controller"
	I1209 23:16:54.885816       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 23:16:54.886677       1 config.go:328] "Starting node config controller"
	I1209 23:16:54.886731       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 23:16:54.995178       1 shared_informer.go:320] Caches are synced for service config
	I1209 23:16:55.004280       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1209 23:16:54.987245       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9591912eda249ef0702e5c6d735086277958194370e72a1fddb4b2529fda6a55] <==
	W1209 23:16:40.837541       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 23:16:40.838565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 23:16:40.837582       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 23:16:40.838635       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:16:40.837624       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1209 23:16:40.838713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 23:16:41.694127       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 23:16:41.694172       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 23:16:41.707528       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1209 23:16:41.707642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 23:16:41.773998       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1209 23:16:41.774060       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 23:16:41.784090       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 23:16:41.784134       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:16:41.801202       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1209 23:16:41.801247       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1209 23:16:41.859930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1209 23:16:41.859992       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:16:41.879327       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1209 23:16:41.879375       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:16:41.890899       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1209 23:16:41.891011       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1209 23:16:41.929521       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1209 23:16:41.929637       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1209 23:16:43.609133       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 09 23:23:58 addons-006125 kubelet[1514]: I1209 23:23:58.394853    1514 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-42hnt\" (UniqueName: \"kubernetes.io/projected/a7d66ae5-fce2-4a9f-a0a2-c1e905897db9-kube-api-access-42hnt\") on node \"addons-006125\" DevicePath \"\""
	Dec 09 23:23:59 addons-006125 kubelet[1514]: I1209 23:23:59.222584    1514 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7d66ae5-fce2-4a9f-a0a2-c1e905897db9" path="/var/lib/kubelet/pods/a7d66ae5-fce2-4a9f-a0a2-c1e905897db9/volumes"
	Dec 09 23:24:03 addons-006125 kubelet[1514]: E1209 23:24:03.425587    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786643425265266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:24:03 addons-006125 kubelet[1514]: E1209 23:24:03.425624    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786643425265266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:24:13 addons-006125 kubelet[1514]: E1209 23:24:13.428742    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786653428453189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:24:13 addons-006125 kubelet[1514]: E1209 23:24:13.428777    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786653428453189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:24:23 addons-006125 kubelet[1514]: E1209 23:24:23.431919    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786663431404436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:24:23 addons-006125 kubelet[1514]: E1209 23:24:23.431960    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786663431404436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:24:33 addons-006125 kubelet[1514]: E1209 23:24:33.434252    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786673433979507,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:24:33 addons-006125 kubelet[1514]: E1209 23:24:33.434290    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786673433979507,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:24:43 addons-006125 kubelet[1514]: E1209 23:24:43.437848    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786683437597432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:24:43 addons-006125 kubelet[1514]: E1209 23:24:43.437886    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786683437597432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:24:43 addons-006125 kubelet[1514]: I1209 23:24:43.709478    1514 scope.go:117] "RemoveContainer" containerID="a81f2583f2542971631037d0cc2f8a566916475d4bcc75e799d93f7566ba2ea5"
	Dec 09 23:24:43 addons-006125 kubelet[1514]: I1209 23:24:43.738042    1514 scope.go:117] "RemoveContainer" containerID="8261124c915515f1f3a9908af08ca8caf794b4948ebeadd0f8bd1a1d3eed2cab"
	Dec 09 23:24:46 addons-006125 kubelet[1514]: I1209 23:24:46.220576    1514 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 23:24:53 addons-006125 kubelet[1514]: E1209 23:24:53.440957    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786693440729907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:24:53 addons-006125 kubelet[1514]: E1209 23:24:53.440994    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786693440729907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:25:03 addons-006125 kubelet[1514]: E1209 23:25:03.443359    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786703443046157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:25:03 addons-006125 kubelet[1514]: E1209 23:25:03.443403    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786703443046157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:25:13 addons-006125 kubelet[1514]: E1209 23:25:13.447219    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786713446418137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:25:13 addons-006125 kubelet[1514]: E1209 23:25:13.447255    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786713446418137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:25:23 addons-006125 kubelet[1514]: E1209 23:25:23.450027    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786723449777380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:25:23 addons-006125 kubelet[1514]: E1209 23:25:23.450069    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786723449777380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:25:33 addons-006125 kubelet[1514]: E1209 23:25:33.453136    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786733452880248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:25:33 addons-006125 kubelet[1514]: E1209 23:25:33.453177    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733786733452880248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [83cc359be952de1780a0d0711ba6424c0cc5987de64528fa84096cb7fbc2c1b0] <==
	I1209 23:17:06.986075       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 23:17:07.030066       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 23:17:07.030210       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 23:17:07.050343       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 23:17:07.051402       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-006125_33b22b2d-39d5-4d44-8f73-be6eefb60b1a!
	I1209 23:17:07.051550       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"40f924e7-c661-441d-bbbc-9188fb45d87d", APIVersion:"v1", ResourceVersion:"875", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-006125_33b22b2d-39d5-4d44-8f73-be6eefb60b1a became leader
	I1209 23:17:07.153203       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-006125_33b22b2d-39d5-4d44-8f73-be6eefb60b1a!
	

                                                
                                                
-- /stdout --
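Note on the kubelet entries above: the recurring "failed to get HasDedicatedImageFs" errors come from the eviction manager, which receives an ImageFsInfoResponse from CRI-O listing the image filesystem (/var/lib/containers/storage/overlay-images) but an empty ContainerFilesystems slice, so it cannot determine whether images sit on a dedicated filesystem and skips that eviction sync. To inspect the raw response the kubelet is parsing, the CRI stats can be queried on the node directly; this is a diagnostic sketch, not part of the recorded run:

	out/minikube-linux-arm64 -p addons-006125 ssh "sudo crictl imagefsinfo"   # prints the same ImageFsInfoResponse fields the kubelet receives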
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-006125 -n addons-006125
helpers_test.go:261: (dbg) Run:  kubectl --context addons-006125 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-006125 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (290.06s)
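For reference, the storage-provisioner excerpt in the post-mortem shows a clean startup sequence: the pod acquires the kube-system/k8s.io-minikube-hostpath leader lease before starting its controller, and the LeaderElection event indicates the lock is held on an Endpoints object. A manual spot-check of that lock (a sketch, not part of the recorded run; the holder identity lives in an annotation written by client-go's endpoints-based resource lock):

	kubectl --context addons-006125 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml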

                                                
                                    

Test pass (297/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.1
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.2/json-events 6.62
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.09
18 TestDownloadOnly/v1.31.2/DeleteAll 0.22
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 213.3
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 11.93
35 TestAddons/parallel/Registry 17.12
37 TestAddons/parallel/InspektorGadget 11.83
40 TestAddons/parallel/CSI 51.92
41 TestAddons/parallel/Headlamp 17.78
42 TestAddons/parallel/CloudSpanner 6.77
43 TestAddons/parallel/LocalPath 53.55
44 TestAddons/parallel/NvidiaDevicePlugin 6.86
45 TestAddons/parallel/Yakd 10.81
47 TestAddons/StoppedEnableDisable 12.18
48 TestCertOptions 40.89
49 TestCertExpiration 252.04
51 TestForceSystemdFlag 43.18
52 TestForceSystemdEnv 37.77
58 TestErrorSpam/setup 29.94
59 TestErrorSpam/start 0.76
60 TestErrorSpam/status 1.1
61 TestErrorSpam/pause 1.86
62 TestErrorSpam/unpause 1.91
63 TestErrorSpam/stop 1.46
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 50.07
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 23.48
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.71
75 TestFunctional/serial/CacheCmd/cache/add_local 1.5
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.14
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.15
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 62.29
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.77
86 TestFunctional/serial/LogsFileCmd 1.78
87 TestFunctional/serial/InvalidService 4.93
89 TestFunctional/parallel/ConfigCmd 0.51
90 TestFunctional/parallel/DashboardCmd 14.3
91 TestFunctional/parallel/DryRun 0.62
92 TestFunctional/parallel/InternationalLanguage 0.26
93 TestFunctional/parallel/StatusCmd 1.24
97 TestFunctional/parallel/ServiceCmdConnect 13.75
98 TestFunctional/parallel/AddonsCmd 0.18
99 TestFunctional/parallel/PersistentVolumeClaim 26.1
101 TestFunctional/parallel/SSHCmd 0.72
102 TestFunctional/parallel/CpCmd 2.44
104 TestFunctional/parallel/FileSync 0.36
105 TestFunctional/parallel/CertSync 2.05
109 TestFunctional/parallel/NodeLabels 0.12
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.63
113 TestFunctional/parallel/License 0.34
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.49
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.14
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 8.27
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
127 TestFunctional/parallel/ProfileCmd/profile_list 0.43
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
129 TestFunctional/parallel/MountCmd/any-port 8.08
130 TestFunctional/parallel/ServiceCmd/List 0.53
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
133 TestFunctional/parallel/ServiceCmd/Format 0.38
134 TestFunctional/parallel/ServiceCmd/URL 0.37
135 TestFunctional/parallel/MountCmd/specific-port 2.63
136 TestFunctional/parallel/MountCmd/VerifyCleanup 1.47
137 TestFunctional/parallel/Version/short 0.1
138 TestFunctional/parallel/Version/components 1.32
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
143 TestFunctional/parallel/ImageCommands/ImageBuild 4
144 TestFunctional/parallel/ImageCommands/Setup 1.52
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.62
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.02
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.24
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.85
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.68
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.27
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 179.79
162 TestMultiControlPlane/serial/DeployApp 8.83
163 TestMultiControlPlane/serial/PingHostFromPods 1.61
164 TestMultiControlPlane/serial/AddWorkerNode 38.15
165 TestMultiControlPlane/serial/NodeLabels 0.11
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.99
167 TestMultiControlPlane/serial/CopyFile 19.3
168 TestMultiControlPlane/serial/StopSecondaryNode 12.8
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.77
170 TestMultiControlPlane/serial/RestartSecondaryNode 23.43
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.44
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 200.58
173 TestMultiControlPlane/serial/DeleteSecondaryNode 12.68
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
175 TestMultiControlPlane/serial/StopCluster 35.73
176 TestMultiControlPlane/serial/RestartCluster 100.91
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
178 TestMultiControlPlane/serial/AddSecondaryNode 75.22
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.99
183 TestJSONOutput/start/Command 49.98
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.75
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.68
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.84
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.25
208 TestKicCustomNetwork/create_custom_network 37.87
209 TestKicCustomNetwork/use_default_bridge_network 30.45
210 TestKicExistingNetwork 36.44
211 TestKicCustomSubnet 34.65
212 TestKicStaticIP 32.97
213 TestMainNoArgs 0.06
214 TestMinikubeProfile 67.39
217 TestMountStart/serial/StartWithMountFirst 6.39
218 TestMountStart/serial/VerifyMountFirst 0.29
219 TestMountStart/serial/StartWithMountSecond 7.19
220 TestMountStart/serial/VerifyMountSecond 0.26
221 TestMountStart/serial/DeleteFirst 1.65
222 TestMountStart/serial/VerifyMountPostDelete 0.26
223 TestMountStart/serial/Stop 1.21
224 TestMountStart/serial/RestartStopped 7.8
225 TestMountStart/serial/VerifyMountPostStop 0.27
228 TestMultiNode/serial/FreshStart2Nodes 78.97
229 TestMultiNode/serial/DeployApp2Nodes 6.78
230 TestMultiNode/serial/PingHostFrom2Pods 0.99
231 TestMultiNode/serial/AddNode 28.37
232 TestMultiNode/serial/MultiNodeLabels 0.09
233 TestMultiNode/serial/ProfileList 0.69
234 TestMultiNode/serial/CopyFile 10.14
235 TestMultiNode/serial/StopNode 2.24
236 TestMultiNode/serial/StartAfterStop 9.5
237 TestMultiNode/serial/RestartKeepsNodes 127.81
238 TestMultiNode/serial/DeleteNode 5.53
239 TestMultiNode/serial/StopMultiNode 23.8
240 TestMultiNode/serial/RestartMultiNode 57.18
241 TestMultiNode/serial/ValidateNameConflict 32.25
246 TestPreload 129.24
248 TestScheduledStopUnix 105.46
251 TestInsufficientStorage 10.12
252 TestRunningBinaryUpgrade 76.57
254 TestKubernetesUpgrade 150.78
255 TestMissingContainerUpgrade 164.34
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
258 TestNoKubernetes/serial/StartWithK8s 40.35
259 TestNoKubernetes/serial/StartWithStopK8s 8.61
260 TestNoKubernetes/serial/Start 8.28
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
262 TestNoKubernetes/serial/ProfileList 1.23
263 TestNoKubernetes/serial/Stop 1.27
264 TestNoKubernetes/serial/StartNoArgs 7.82
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
266 TestStoppedBinaryUpgrade/Setup 0.72
267 TestStoppedBinaryUpgrade/Upgrade 78.71
268 TestStoppedBinaryUpgrade/MinikubeLogs 1.27
277 TestPause/serial/Start 64.09
278 TestPause/serial/SecondStartNoReconfiguration 20.48
279 TestPause/serial/Pause 1.37
280 TestPause/serial/VerifyStatus 0.44
281 TestPause/serial/Unpause 1.08
282 TestPause/serial/PauseAgain 1.83
283 TestPause/serial/DeletePaused 3.25
284 TestPause/serial/VerifyDeletedResources 0.31
292 TestNetworkPlugins/group/false 5.93
297 TestStartStop/group/old-k8s-version/serial/FirstStart 152.58
298 TestStartStop/group/old-k8s-version/serial/DeployApp 10.58
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.12
300 TestStartStop/group/old-k8s-version/serial/Stop 12.01
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.27
302 TestStartStop/group/old-k8s-version/serial/SecondStart 149.83
304 TestStartStop/group/no-preload/serial/FirstStart 71.77
305 TestStartStop/group/no-preload/serial/DeployApp 10.4
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.16
307 TestStartStop/group/no-preload/serial/Stop 12.03
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
309 TestStartStop/group/no-preload/serial/SecondStart 300.21
310 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
311 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
312 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
313 TestStartStop/group/old-k8s-version/serial/Pause 3.12
315 TestStartStop/group/embed-certs/serial/FirstStart 53.36
316 TestStartStop/group/embed-certs/serial/DeployApp 11.36
317 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.18
318 TestStartStop/group/embed-certs/serial/Stop 11.96
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
320 TestStartStop/group/embed-certs/serial/SecondStart 292.19
321 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
322 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
323 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
324 TestStartStop/group/no-preload/serial/Pause 3.13
326 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 50.8
327 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 12.36
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.15
329 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.02
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 297.05
332 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
333 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
334 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
335 TestStartStop/group/embed-certs/serial/Pause 3.29
337 TestStartStop/group/newest-cni/serial/FirstStart 35.62
338 TestStartStop/group/newest-cni/serial/DeployApp 0
339 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.48
340 TestStartStop/group/newest-cni/serial/Stop 1.3
341 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
342 TestStartStop/group/newest-cni/serial/SecondStart 15.45
343 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.38
346 TestStartStop/group/newest-cni/serial/Pause 3.21
347 TestNetworkPlugins/group/auto/Start 54.11
348 TestNetworkPlugins/group/auto/KubeletFlags 0.43
349 TestNetworkPlugins/group/auto/NetCatPod 10.28
350 TestNetworkPlugins/group/auto/DNS 0.2
351 TestNetworkPlugins/group/auto/Localhost 0.17
352 TestNetworkPlugins/group/auto/HairPin 0.17
353 TestNetworkPlugins/group/kindnet/Start 48.38
354 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
355 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
356 TestNetworkPlugins/group/kindnet/NetCatPod 12.27
357 TestNetworkPlugins/group/kindnet/DNS 0.2
358 TestNetworkPlugins/group/kindnet/Localhost 0.16
359 TestNetworkPlugins/group/kindnet/HairPin 0.17
360 TestNetworkPlugins/group/calico/Start 69.77
361 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
362 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.15
363 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
364 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.73
365 TestNetworkPlugins/group/custom-flannel/Start 60.54
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/calico/KubeletFlags 0.34
368 TestNetworkPlugins/group/calico/NetCatPod 12.3
369 TestNetworkPlugins/group/calico/DNS 0.21
370 TestNetworkPlugins/group/calico/Localhost 0.18
371 TestNetworkPlugins/group/calico/HairPin 0.16
372 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
373 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.31
374 TestNetworkPlugins/group/custom-flannel/DNS 0.28
375 TestNetworkPlugins/group/custom-flannel/Localhost 0.23
376 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
377 TestNetworkPlugins/group/enable-default-cni/Start 82.24
378 TestNetworkPlugins/group/flannel/Start 59.01
379 TestNetworkPlugins/group/flannel/ControllerPod 6.01
380 TestNetworkPlugins/group/flannel/KubeletFlags 0.42
381 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.49
382 TestNetworkPlugins/group/flannel/NetCatPod 13.39
383 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.41
384 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
385 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
386 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
387 TestNetworkPlugins/group/flannel/DNS 0.19
388 TestNetworkPlugins/group/flannel/Localhost 0.17
389 TestNetworkPlugins/group/flannel/HairPin 0.17
390 TestNetworkPlugins/group/bridge/Start 70.42
391 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
392 TestNetworkPlugins/group/bridge/NetCatPod 11.26
393 TestNetworkPlugins/group/bridge/DNS 0.17
394 TestNetworkPlugins/group/bridge/Localhost 0.16
395 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (9.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-328809 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-328809 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.103261114s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.10s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1209 23:15:38.866991  297827 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1209 23:15:38.867078  297827 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-292449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
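The check passes because the tarball fetched during the json-events run is already in the shared cache; its presence can be confirmed by hand (sketch, path copied from the log above):

	ls -lh /home/jenkins/minikube-integration/19888-292449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4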

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-328809
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-328809: exit status 85 (83.587992ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-328809 | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC |          |
	|         | -p download-only-328809        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:15:29
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.2 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:15:29.816473  297832 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:15:29.816884  297832 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:15:29.816900  297832 out.go:358] Setting ErrFile to fd 2...
	I1209 23:15:29.816907  297832 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:15:29.817161  297832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-292449/.minikube/bin
	W1209 23:15:29.817308  297832 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19888-292449/.minikube/config/config.json: open /home/jenkins/minikube-integration/19888-292449/.minikube/config/config.json: no such file or directory
	I1209 23:15:29.817772  297832 out.go:352] Setting JSON to true
	I1209 23:15:29.818636  297832 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7071,"bootTime":1733779059,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1209 23:15:29.818705  297832 start.go:139] virtualization:  
	I1209 23:15:29.821676  297832 out.go:97] [download-only-328809] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1209 23:15:29.821853  297832 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19888-292449/.minikube/cache/preloaded-tarball: no such file or directory
	I1209 23:15:29.821951  297832 notify.go:220] Checking for updates...
	I1209 23:15:29.824787  297832 out.go:169] MINIKUBE_LOCATION=19888
	I1209 23:15:29.827140  297832 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:15:29.828971  297832 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19888-292449/kubeconfig
	I1209 23:15:29.830857  297832 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-292449/.minikube
	I1209 23:15:29.832655  297832 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1209 23:15:29.836736  297832 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 23:15:29.837035  297832 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:15:29.869608  297832 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1209 23:15:29.869728  297832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:15:29.931762  297832 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-09 23:15:29.921751871 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1209 23:15:29.931879  297832 docker.go:318] overlay module found
	I1209 23:15:29.933892  297832 out.go:97] Using the docker driver based on user configuration
	I1209 23:15:29.933921  297832 start.go:297] selected driver: docker
	I1209 23:15:29.933929  297832 start.go:901] validating driver "docker" against <nil>
	I1209 23:15:29.934039  297832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:15:29.997030  297832 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-09 23:15:29.979467395 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1209 23:15:29.997283  297832 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 23:15:29.997597  297832 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1209 23:15:29.997753  297832 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 23:15:29.999819  297832 out.go:169] Using Docker driver with root privileges
	I1209 23:15:30.015754  297832 cni.go:84] Creating CNI manager for ""
	I1209 23:15:30.015854  297832 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 23:15:30.015865  297832 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 23:15:30.015961  297832 start.go:340] cluster config:
	{Name:download-only-328809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-328809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:15:30.028960  297832 out.go:97] Starting "download-only-328809" primary control-plane node in "download-only-328809" cluster
	I1209 23:15:30.029010  297832 cache.go:121] Beginning downloading kic base image for docker with crio
	I1209 23:15:30.047461  297832 out.go:97] Pulling base image v0.0.45-1730888964-19917 ...
	I1209 23:15:30.047537  297832 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 23:15:30.047761  297832 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1209 23:15:30.078758  297832 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1209 23:15:30.078987  297832 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1209 23:15:30.079094  297832 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1209 23:15:30.156155  297832 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I1209 23:15:30.156199  297832 cache.go:56] Caching tarball of preloaded images
	I1209 23:15:30.156397  297832 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 23:15:30.159163  297832 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1209 23:15:30.159205  297832 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I1209 23:15:30.249764  297832 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19888-292449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-328809 host does not exist
	  To start a cluster, run: "minikube start -p download-only-328809"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
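The preload URL in the log above carries its expected digest as a query parameter (checksum=md5:59cd2ef07b53f039bfd1761b921f2a02), which minikube verifies after the download completes. The same verification can be reproduced outside the harness; a sketch with standard tools, using the URL and md5 copied from the log:

	curl -fLo preload.tar.lz4 "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4"
	echo "59cd2ef07b53f039bfd1761b921f2a02  preload.tar.lz4" | md5sum -c -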

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-328809
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.2/json-events (6.62s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-702821 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-702821 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.616259025s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (6.62s)

                                                
                                    
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1209 23:15:45.935337  297827 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1209 23:15:45.935377  297827 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-292449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-702821
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-702821: exit status 85 (90.095679ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-328809 | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC |                     |
	|         | -p download-only-328809        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC | 09 Dec 24 23:15 UTC |
	| delete  | -p download-only-328809        | download-only-328809 | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC | 09 Dec 24 23:15 UTC |
	| start   | -o=json --download-only        | download-only-702821 | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC |                     |
	|         | -p download-only-702821        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:15:39
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.2 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:15:39.365902  298031 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:15:39.366142  298031 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:15:39.366169  298031 out.go:358] Setting ErrFile to fd 2...
	I1209 23:15:39.366188  298031 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:15:39.366484  298031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-292449/.minikube/bin
	I1209 23:15:39.366975  298031 out.go:352] Setting JSON to true
	I1209 23:15:39.368000  298031 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7081,"bootTime":1733779059,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1209 23:15:39.368107  298031 start.go:139] virtualization:  
	I1209 23:15:39.370898  298031 out.go:97] [download-only-702821] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1209 23:15:39.371158  298031 notify.go:220] Checking for updates...
	I1209 23:15:39.373529  298031 out.go:169] MINIKUBE_LOCATION=19888
	I1209 23:15:39.376073  298031 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:15:39.378359  298031 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19888-292449/kubeconfig
	I1209 23:15:39.380570  298031 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-292449/.minikube
	I1209 23:15:39.382629  298031 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1209 23:15:39.387224  298031 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 23:15:39.387528  298031 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:15:39.416764  298031 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1209 23:15:39.416877  298031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:15:39.474242  298031 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-09 23:15:39.464340107 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1209 23:15:39.474364  298031 docker.go:318] overlay module found
	I1209 23:15:39.476641  298031 out.go:97] Using the docker driver based on user configuration
	I1209 23:15:39.476682  298031 start.go:297] selected driver: docker
	I1209 23:15:39.476690  298031 start.go:901] validating driver "docker" against <nil>
	I1209 23:15:39.476798  298031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:15:39.532950  298031 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-09 23:15:39.523541327 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1209 23:15:39.533197  298031 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 23:15:39.533512  298031 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1209 23:15:39.533709  298031 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 23:15:39.535943  298031 out.go:169] Using Docker driver with root privileges
	I1209 23:15:39.537988  298031 cni.go:84] Creating CNI manager for ""
	I1209 23:15:39.538064  298031 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 23:15:39.538078  298031 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 23:15:39.538156  298031 start.go:340] cluster config:
	{Name:download-only-702821 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-702821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:15:39.540338  298031 out.go:97] Starting "download-only-702821" primary control-plane node in "download-only-702821" cluster
	I1209 23:15:39.540376  298031 cache.go:121] Beginning downloading kic base image for docker with crio
	I1209 23:15:39.542118  298031 out.go:97] Pulling base image v0.0.45-1730888964-19917 ...
	I1209 23:15:39.542164  298031 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:15:39.542267  298031 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1209 23:15:39.558966  298031 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1209 23:15:39.559093  298031 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1209 23:15:39.559129  298031 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory, skipping pull
	I1209 23:15:39.559135  298031 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in cache, skipping pull
	I1209 23:15:39.559179  298031 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1209 23:15:39.616251  298031 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4
	I1209 23:15:39.616278  298031 cache.go:56] Caching tarball of preloaded images
	I1209 23:15:39.617410  298031 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:15:39.619943  298031 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1209 23:15:39.619992  298031 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4 ...
	I1209 23:15:39.704753  298031 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:810fe254d498dda367f4e14b5cba638f -> /home/jenkins/minikube-integration/19888-292449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4
	I1209 23:15:44.315921  298031 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4 ...
	I1209 23:15:44.316028  298031 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19888-292449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-702821 host does not exist
	  To start a cluster, run: "minikube start -p download-only-702821"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.09s)
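
The preload above is fetched with a "?checksum=md5:..." query, and the harness saves and verifies the digest after the transfer (preload.go:236/247/254). A minimal sketch of that verification step in Go, assuming a hypothetical verifyMD5 helper rather than minikube's actual download code:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 re-reads a downloaded tarball and compares its MD5 digest
	// against the expected hex string from the ?checksum=md5:... query.
	func verifyMD5(path, wantHex string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
		}
		return nil
	}

	func main() {
		// Example values from the log above; the tarball path is local to the CI host.
		err := verifyMD5("preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4",
			"810fe254d498dda367f4e14b5cba638f")
		fmt.Println(err)
	}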

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-702821
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
I1209 23:15:47.271866  297827 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-505134 --alsologtostderr --binary-mirror http://127.0.0.1:33693 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-505134" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-505134
--- PASS: TestBinaryMirror (0.59s)
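
TestBinaryMirror points minikube at a local HTTP mirror on 127.0.0.1:33693 instead of dl.k8s.io for the kubectl binary. A minimal sketch of such a mirror, assuming a directory laid out like the upstream release tree (this is illustrative, not the test's own server):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve a directory that mirrors the upstream layout, e.g.
		// ./v1.31.2/bin/linux/arm64/kubectl, so --binary-mirror URLs resolve.
		http.Handle("/", http.FileServer(http.Dir(".")))
		log.Fatal(http.ListenAndServe("127.0.0.1:33693", nil))
	}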

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-006125
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-006125: exit status 85 (88.663571ms)

                                                
                                                
-- stdout --
	* Profile "addons-006125" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-006125"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-006125
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-006125: exit status 85 (78.488641ms)

                                                
                                                
-- stdout --
	* Profile "addons-006125" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-006125"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (213.3s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-006125 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-006125 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m33.302716645s)
--- PASS: TestAddons/Setup (213.30s)
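
The setup run enables fourteen addons in a single start invocation. A sketch of assembling that argument list programmatically, assuming a hypothetical driver program that shells out to the minikube binary the same way the harness does:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		addons := []string{
			"registry", "metrics-server", "volumesnapshots", "csi-hostpath-driver",
			"gcp-auth", "cloud-spanner", "inspektor-gadget", "nvidia-device-plugin",
			"yakd", "volcano", "amd-gpu-device-plugin", "ingress", "ingress-dns",
			"storage-provisioner-rancher",
		}
		args := []string{"start", "-p", "addons-006125", "--wait=true", "--memory=4000",
			"--driver=docker", "--container-runtime=crio"}
		for _, a := range addons {
			args = append(args, "--addons="+a)
		}
		out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
		fmt.Println(string(out), err)
	}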

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-006125 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-006125 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (11.93s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-006125 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-006125 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5c202634-18e2-4348-a013-259834724bf1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5c202634-18e2-4348-a013-259834724bf1] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.003639737s
addons_test.go:633: (dbg) Run:  kubectl --context addons-006125 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-006125 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-006125 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-006125 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.93s)
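
The FakeCredentials check verifies that the gcp-auth webhook injected GOOGLE_APPLICATION_CREDENTIALS into the busybox pod. A sketch of the same probe, assuming a hypothetical podEnv helper that shells out to kubectl exec as the test does:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// podEnv returns the value of an env var inside a running pod, mirroring
	// the test's kubectl exec ... printenv invocation.
	func podEnv(ctx, pod, key string) (string, error) {
		out, err := exec.Command("kubectl", "--context", ctx, "exec", pod, "--",
			"/bin/sh", "-c", "printenv "+key).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		v, err := podEnv("addons-006125", "busybox", "GOOGLE_APPLICATION_CREDENTIALS")
		fmt.Println(v, err)
	}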

                                                
                                    
TestAddons/parallel/Registry (17.12s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 12.732965ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5cc95cd69-s95j5" [0e371bb5-f973-4496-b0af-810240c01f88] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008297273s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-m54xt" [c65f40cc-4e12-46bd-a8c7-12d30baa522c] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004677851s
addons_test.go:331: (dbg) Run:  kubectl --context addons-006125 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-006125 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-006125 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.151884511s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-006125 ip
2024/12/09 23:19:58 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-006125 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.12s)
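
After the in-cluster wget --spider check, the harness fetches the registry through the node IP (the DEBUG GET line above). A sketch of that external reachability probe, assuming plain net/http with a timeout rather than the harness's own retry logic:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		// 192.168.49.2 is the node IP reported by `minikube ip` above;
		// 5000 is the port the registry addon exposes on the node.
		resp, err := client.Get("http://192.168.49.2:5000")
		if err != nil {
			fmt.Println("registry unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("registry status:", resp.Status)
	}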

                                                
                                    
TestAddons/parallel/InspektorGadget (11.83s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dbqrf" [b38cf22e-a387-4ebc-aac6-5cdc528037e8] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004863478s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-006125 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-006125 addons disable inspektor-gadget --alsologtostderr -v=1: (5.821820444s)
--- PASS: TestAddons/parallel/InspektorGadget (11.83s)

                                                
                                    
TestAddons/parallel/CSI (51.92s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1209 23:20:24.441728  297827 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1209 23:20:24.455741  297827 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1209 23:20:24.455776  297827 kapi.go:107] duration metric: took 14.065935ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 14.07734ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-006125 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-006125 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [251637cb-77d0-45a2-8c0e-c8d07e98d808] Pending
helpers_test.go:344: "task-pv-pod" [251637cb-77d0-45a2-8c0e-c8d07e98d808] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [251637cb-77d0-45a2-8c0e-c8d07e98d808] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003383538s
addons_test.go:511: (dbg) Run:  kubectl --context addons-006125 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-006125 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-006125 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-006125 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-006125 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-006125 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-006125 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [bd31eaa4-2222-463a-aab5-acd0ed4445e2] Pending
helpers_test.go:344: "task-pv-pod-restore" [bd31eaa4-2222-463a-aab5-acd0ed4445e2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [bd31eaa4-2222-463a-aab5-acd0ed4445e2] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004802741s
addons_test.go:553: (dbg) Run:  kubectl --context addons-006125 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-006125 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-006125 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-006125 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-006125 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-006125 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.823938511s)
--- PASS: TestAddons/parallel/CSI (51.92s)
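
The repeated helpers_test.go:394 lines are a poll loop reading the PVC's .status.phase until it reports Bound. A sketch of an equivalent loop, assuming a hypothetical waitPVCBound helper built around the same kubectl jsonpath query:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitPVCBound polls a PVC's phase until it is Bound or the deadline
	// passes, mirroring the jsonpath query repeated in the log above.
	func waitPVCBound(ctx, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
				"-o", "jsonpath={.status.phase}", "-n", ns).Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s not Bound within %v", ns, name, timeout)
	}

	func main() {
		fmt.Println(waitPVCBound("addons-006125", "default", "hpvc", 6*time.Minute))
	}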

                                                
                                    
TestAddons/parallel/Headlamp (17.78s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-006125 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-006125 --alsologtostderr -v=1: (1.026721266s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-rr555" [dd4ff0d4-715a-4b65-a525-a9a1271dbb68] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-rr555" [dd4ff0d4-715a-4b65-a525-a9a1271dbb68] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003344985s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-006125 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-006125 addons disable headlamp --alsologtostderr -v=1: (5.753143829s)
--- PASS: TestAddons/parallel/Headlamp (17.78s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.77s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-phrjf" [ea9392ca-8d15-41c4-bbf7-853f6aa87290] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003653557s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-006125 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.77s)

                                                
                                    
TestAddons/parallel/LocalPath (53.55s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-006125 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-006125 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006125 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [12887975-cee1-46b5-ad1a-a4060791e48b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [12887975-cee1-46b5-ad1a-a4060791e48b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [12887975-cee1-46b5-ad1a-a4060791e48b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004947205s
addons_test.go:906: (dbg) Run:  kubectl --context addons-006125 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-006125 ssh "cat /opt/local-path-provisioner/pvc-2e1b855f-45ef-4582-80d6-f5a3741f0811_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-006125 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-006125 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-006125 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-006125 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.223270431s)
--- PASS: TestAddons/parallel/LocalPath (53.55s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.86s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-nqsf9" [ae3a9e66-1569-459a-8a4c-25e166bd28a9] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003677695s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-006125 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.86s)

                                                
                                    
TestAddons/parallel/Yakd (10.81s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-2lfxn" [32b4bcdd-a89e-4066-b4dd-b4fb387dc16c] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.008911548s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-006125 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-006125 addons disable yakd --alsologtostderr -v=1: (5.788495066s)
--- PASS: TestAddons/parallel/Yakd (10.81s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.18s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-006125
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-006125: (11.869597722s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-006125
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-006125
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-006125
--- PASS: TestAddons/StoppedEnableDisable (12.18s)

                                                
                                    
TestCertOptions (40.89s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-218566 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-218566 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (38.184575087s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-218566 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-218566 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-218566 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-218566" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-218566
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-218566: (2.007603188s)
--- PASS: TestCertOptions (40.89s)
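
The test inspects the apiserver certificate with openssl and expects the extra --apiserver-ips and --apiserver-names values to appear as SANs. A sketch of the same check using crypto/x509, assuming the certificate has already been copied out of the node (the test instead greps the openssl text output over ssh):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Assumes the cert was fetched first, e.g. via
		// minikube ssh "sudo cat /var/lib/minikube/certs/apiserver.crt".
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			fmt.Println(err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("DNS SANs:", cert.DNSNames)     // should include www.google.com
		fmt.Println("IP SANs:", cert.IPAddresses)   // should include 192.168.15.15
	}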

                                                
                                    
TestCertExpiration (252.04s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-681457 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1210 00:04:22.098206  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-681457 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.350072747s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-681457 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-681457 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (27.126269731s)
helpers_test.go:175: Cleaning up "cert-expiration-681457" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-681457
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-681457: (2.560575293s)
--- PASS: TestCertExpiration (252.04s)

                                                
                                    
TestForceSystemdFlag (43.18s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-192186 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-192186 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.430143959s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-192186 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-192186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-192186
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-192186: (2.428649463s)
--- PASS: TestForceSystemdFlag (43.18s)

                                                
                                    
TestForceSystemdEnv (37.77s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-085786 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-085786 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (34.995934988s)
helpers_test.go:175: Cleaning up "force-systemd-env-085786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-085786
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-085786: (2.775900156s)
--- PASS: TestForceSystemdEnv (37.77s)

                                                
                                    
TestErrorSpam/setup (29.94s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-549821 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-549821 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-549821 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-549821 --driver=docker  --container-runtime=crio: (29.938250807s)
--- PASS: TestErrorSpam/setup (29.94s)

                                                
                                    
TestErrorSpam/start (0.76s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-549821 --log_dir /tmp/nospam-549821 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-549821 --log_dir /tmp/nospam-549821 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-549821 --log_dir /tmp/nospam-549821 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

                                                
                                    
TestErrorSpam/status (1.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-549821 --log_dir /tmp/nospam-549821 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-549821 --log_dir /tmp/nospam-549821 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-549821 --log_dir /tmp/nospam-549821 status
--- PASS: TestErrorSpam/status (1.10s)

                                                
                                    
TestErrorSpam/pause (1.86s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-549821 --log_dir /tmp/nospam-549821 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-549821 --log_dir /tmp/nospam-549821 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-549821 --log_dir /tmp/nospam-549821 pause
--- PASS: TestErrorSpam/pause (1.86s)

                                                
                                    
TestErrorSpam/unpause (1.91s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-549821 --log_dir /tmp/nospam-549821 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-549821 --log_dir /tmp/nospam-549821 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-549821 --log_dir /tmp/nospam-549821 unpause
--- PASS: TestErrorSpam/unpause (1.91s)

                                                
                                    
TestErrorSpam/stop (1.46s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-549821 --log_dir /tmp/nospam-549821 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-549821 --log_dir /tmp/nospam-549821 stop: (1.264325583s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-549821 --log_dir /tmp/nospam-549821 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-549821 --log_dir /tmp/nospam-549821 stop
--- PASS: TestErrorSpam/stop (1.46s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19888-292449/.minikube/files/etc/test/nested/copy/297827/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (50.07s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-648515 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-648515 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (50.066189925s)
--- PASS: TestFunctional/serial/StartWithProxy (50.07s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (23.48s)

=== RUN   TestFunctional/serial/SoftStart
I1209 23:27:33.126303  297827 config.go:182] Loaded profile config "functional-648515": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-648515 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-648515 --alsologtostderr -v=8: (23.468443902s)
functional_test.go:663: soft start took 23.475147552s for "functional-648515" cluster.
I1209 23:27:56.595090  297827 config.go:182] Loaded profile config "functional-648515": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (23.48s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-648515 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.71s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-648515 cache add registry.k8s.io/pause:3.1: (1.576648485s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-648515 cache add registry.k8s.io/pause:3.3: (1.539459003s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-648515 cache add registry.k8s.io/pause:latest: (1.595942975s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.71s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.5s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-648515 /tmp/TestFunctionalserialCacheCmdcacheadd_local4199909672/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 cache add minikube-local-cache-test:functional-648515
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 cache delete minikube-local-cache-test:functional-648515
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-648515
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.50s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-648515 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (285.831413ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-648515 cache reload: (1.230297959s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.14s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 kubectl -- --context functional-648515 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-648515 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (62.29s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-648515 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-648515 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m2.292402715s)
functional_test.go:761: restart took 1m2.292525489s for "functional-648515" cluster.
I1209 23:29:08.261357  297827 config.go:182] Loaded profile config "functional-648515": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (62.29s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-648515 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
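
ComponentHealth lists the control-plane pods as JSON and asserts each is Running and Ready, which produces the phase/status lines above. A sketch of decoding that output, assuming minimal hand-rolled structs instead of the real Kubernetes API types the test uses:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// podList is a minimal slice of the kubectl JSON output: just the fields
	// needed to report phase and the Ready condition per control-plane pod.
	type podList struct {
		Items []struct {
			Metadata struct {
				Labels map[string]string `json:"labels"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-648515",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		var pl podList
		if err := json.Unmarshal(out, &pl); err != nil {
			fmt.Println(err)
			return
		}
		for _, p := range pl.Items {
			ready := "False"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase=%s ready=%s\n",
				p.Metadata.Labels["component"], p.Status.Phase, ready)
		}
	}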

                                                
                                    
TestFunctional/serial/LogsCmd (1.77s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-648515 logs: (1.768857267s)
--- PASS: TestFunctional/serial/LogsCmd (1.77s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.78s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 logs --file /tmp/TestFunctionalserialLogsFileCmd4280121900/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-648515 logs --file /tmp/TestFunctionalserialLogsFileCmd4280121900/001/logs.txt: (1.778660317s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.78s)

                                                
                                    
TestFunctional/serial/InvalidService (4.93s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-648515 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-648515
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-648515: exit status 115 (725.701957ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30824 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-648515 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.93s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-648515 config get cpus: exit status 14 (109.370889ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-648515 config get cpus: exit status 14 (85.378367ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.3s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-648515 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-648515 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 326989: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.30s)

                                                
                                    
TestFunctional/parallel/DryRun (0.62s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-648515 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-648515 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (234.21864ms)
-- stdout --
	* [functional-648515] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19888
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19888-292449/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-292449/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I1209 23:29:54.498570  326413 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:29:54.498783  326413 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:29:54.498795  326413 out.go:358] Setting ErrFile to fd 2...
	I1209 23:29:54.498802  326413 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:29:54.499121  326413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-292449/.minikube/bin
	I1209 23:29:54.499558  326413 out.go:352] Setting JSON to false
	I1209 23:29:54.500577  326413 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7936,"bootTime":1733779059,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1209 23:29:54.500662  326413 start.go:139] virtualization:  
	I1209 23:29:54.503164  326413 out.go:177] * [functional-648515] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1209 23:29:54.505713  326413 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 23:29:54.505821  326413 notify.go:220] Checking for updates...
	I1209 23:29:54.512021  326413 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:29:54.514351  326413 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-292449/kubeconfig
	I1209 23:29:54.516272  326413 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-292449/.minikube
	I1209 23:29:54.518128  326413 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 23:29:54.520030  326413 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:29:54.522362  326413 config.go:182] Loaded profile config "functional-648515": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:29:54.522876  326413 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:29:54.559357  326413 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1209 23:29:54.559485  326413 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:29:54.631909  326413 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-09 23:29:54.617818734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1209 23:29:54.632022  326413 docker.go:318] overlay module found
	I1209 23:29:54.636343  326413 out.go:177] * Using the docker driver based on existing profile
	I1209 23:29:54.638167  326413 start.go:297] selected driver: docker
	I1209 23:29:54.638189  326413 start.go:901] validating driver "docker" against &{Name:functional-648515 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-648515 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:29:54.638304  326413 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:29:54.641390  326413 out.go:201] 
	W1209 23:29:54.646026  326413 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1209 23:29:54.648146  326413 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-648515 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.62s)
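The non-zero exit above is the expected result: with --dry-run, minikube validates flags without touching the cluster, and a memory request below the usable 1800MB minimum exits with code 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A sketch of the same validation probe, under the same binary and profile assumptions as the previous sketch:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("minikube", "start", "-p", "functional-648515",
            "--dry-run", "--memory", "250MB",
            "--driver=docker", "--container-runtime=crio")
        err := cmd.Run()
        var ee *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("unexpected: dry-run accepted 250MB")
        case errors.As(err, &ee):
            fmt.Println("exit code:", ee.ExitCode()) // expect 23
        default:
            fmt.Println("could not run minikube:", err)
        }
    }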

TestFunctional/parallel/InternationalLanguage (0.26s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-648515 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-648515 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (263.728698ms)
-- stdout --
	* [functional-648515] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19888
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19888-292449/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-292449/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I1209 23:29:54.223999  326335 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:29:54.224233  326335 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:29:54.224261  326335 out.go:358] Setting ErrFile to fd 2...
	I1209 23:29:54.224283  326335 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:29:54.224664  326335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-292449/.minikube/bin
	I1209 23:29:54.225109  326335 out.go:352] Setting JSON to false
	I1209 23:29:54.226202  326335 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7936,"bootTime":1733779059,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1209 23:29:54.226310  326335 start.go:139] virtualization:  
	I1209 23:29:54.229425  326335 out.go:177] * [functional-648515] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1209 23:29:54.231719  326335 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 23:29:54.232554  326335 notify.go:220] Checking for updates...
	I1209 23:29:54.238410  326335 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:29:54.243295  326335 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-292449/kubeconfig
	I1209 23:29:54.249017  326335 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-292449/.minikube
	I1209 23:29:54.251024  326335 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 23:29:54.253218  326335 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:29:54.255848  326335 config.go:182] Loaded profile config "functional-648515": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:29:54.256377  326335 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:29:54.296778  326335 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1209 23:29:54.296886  326335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:29:54.404022  326335 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-09 23:29:54.393413901 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1209 23:29:54.404140  326335 docker.go:318] overlay module found
	I1209 23:29:54.406329  326335 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1209 23:29:54.408304  326335 start.go:297] selected driver: docker
	I1209 23:29:54.408328  326335 start.go:901] validating driver "docker" against &{Name:functional-648515 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-648515 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:29:54.408440  326335 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:29:54.411027  326335 out.go:201] 
	W1209 23:29:54.413059  326335 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1209 23:29:54.414888  326335 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)
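The French output above comes from locale detection: the harness forces a French locale into the child process environment, and minikube localizes its messages accordingly. A sketch of the same probe (the LC_ALL value and the substring checked are assumptions based on this run's output, not taken from the test source):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        cmd := exec.Command("minikube", "start", "-p", "functional-648515",
            "--dry-run", "--memory", "250MB",
            "--driver=docker", "--container-runtime=crio")
        cmd.Env = append(os.Environ(), "LC_ALL=fr") // request French messages
        out, _ := cmd.CombinedOutput()              // exit code 23 is expected here
        fmt.Println(strings.Contains(string(out), "pilote docker"))
    }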

TestFunctional/parallel/StatusCmd (1.24s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.24s)

TestFunctional/parallel/ServiceCmdConnect (13.75s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-648515 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-648515 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-2m7kh" [403d6688-f8a0-481c-8d9b-393c19c8515e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-2m7kh" [403d6688-f8a0-481c-8d9b-393c19c8515e] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.006871826s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31618
functional_test.go:1675: http://192.168.49.2:31618: success! body:
Hostname: hello-node-connect-65d86f57f4-2m7kh

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31618
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.75s)
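The check above reduces to: resolve the service's NodePort URL, issue a GET, and confirm the echoserver reflects the request back. A sketch with the endpoint hard-coded from this run (it would normally come from "minikube service ... --url"):

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    func main() {
        resp, err := http.Get("http://192.168.49.2:31618/")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // The serving pod's name shows up in the reflected Hostname line.
        fmt.Println(strings.Contains(string(body), "Hostname: hello-node-connect"))
    }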

TestFunctional/parallel/AddonsCmd (0.18s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (26.1s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [65dfceec-8423-4a55-803f-5d6b7ac77d63] Running
E1209 23:29:22.098857  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:29:22.105328  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:29:22.116704  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:29:22.138168  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:29:22.179623  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:29:22.261532  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:29:22.423545  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:29:22.745132  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003865644s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-648515 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-648515 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-648515 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-648515 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d72d17e8-e4a5-4e4f-bc4b-c970f4f3102c] Pending
helpers_test.go:344: "sp-pod" [d72d17e8-e4a5-4e4f-bc4b-c970f4f3102c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1209 23:29:27.230478  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [d72d17e8-e4a5-4e4f-bc4b-c970f4f3102c] Running
E1209 23:29:32.351841  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004364793s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-648515 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-648515 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-648515 delete -f testdata/storage-provisioner/pod.yaml: (1.020989342s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-648515 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [08ea5b1c-1840-4ac7-9d6a-b7485f2d9704] Pending
helpers_test.go:344: "sp-pod" [08ea5b1c-1840-4ac7-9d6a-b7485f2d9704] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.005082957s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-648515 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.10s)
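The second apply/wait cycle above is the actual persistence check: a file written through the first sp-pod must still be visible from a replacement pod backed by the same PVC. A condensed sketch with kubectl (context name and manifest path taken from this run; readiness waiting omitted for brevity):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func kubectl(args ...string) ([]byte, error) {
        args = append([]string{"--context", "functional-648515"}, args...)
        return exec.Command("kubectl", args...).CombinedOutput()
    }

    func main() {
        kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
        kubectl("delete", "pod", "sp-pod")
        kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
        // A real check waits for the new pod to be Ready before this exec.
        out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
        fmt.Println(string(out), err) // "foo" should survive the pod swap
    }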

TestFunctional/parallel/SSHCmd (0.72s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

TestFunctional/parallel/CpCmd (2.44s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh -n functional-648515 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 cp functional-648515:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd219199651/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh -n functional-648515 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh -n functional-648515 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.44s)

TestFunctional/parallel/FileSync (0.36s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/297827/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh "sudo cat /etc/test/nested/copy/297827/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

TestFunctional/parallel/CertSync (2.05s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/297827.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh "sudo cat /etc/ssl/certs/297827.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/297827.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh "sudo cat /usr/share/ca-certificates/297827.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/2978272.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh "sudo cat /etc/ssl/certs/2978272.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/2978272.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh "sudo cat /usr/share/ca-certificates/2978272.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.05s)
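Each certificate is checked under two descriptive paths and one hashed name: 51391683.0 and 3ec20f2e.0 appear to follow the OpenSSL subject-hash convention that system trust stores use to index CA certs. A sketch that derives the expected hashed name for a given cert; it assumes openssl is available and would run inside the node (e.g. via "minikube ssh"):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("openssl", "x509", "-noout",
            "-subject_hash", "-in", "/etc/ssl/certs/297827.pem").Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        // The trust store entry is expected to exist under <hash>.0.
        fmt.Printf("/etc/ssl/certs/%s.0\n", hash)
    }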

TestFunctional/parallel/NodeLabels (0.12s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-648515 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-648515 ssh "sudo systemctl is-active docker": exit status 1 (347.878447ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-648515 ssh "sudo systemctl is-active containerd": exit status 1 (279.603383ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)
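Exit status 3 from "systemctl is-active" is systemd's code for an inactive unit, so the two non-zero exits above are exactly the desired outcome: with crio as the active runtime, docker and containerd must be stopped. A local sketch of the same probe (the test itself runs it inside the node over ssh):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func isActive(unit string) bool {
        err := exec.Command("systemctl", "is-active", "--quiet", unit).Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            return false // non-zero exit: inactive, failed, or unknown unit
        }
        return err == nil
    }

    func main() {
        for _, u := range []string{"docker", "containerd", "crio"} {
            fmt.Println(u, "active:", isActive(u))
        }
    }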

TestFunctional/parallel/License (0.34s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-648515 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-648515 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-648515 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 324116: os: process already finished
helpers_test.go:502: unable to terminate pid 323913: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-648515 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-648515 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.49s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-648515 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ab4183b1-e394-4e87-a3da-d6d2a327178f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ab4183b1-e394-4e87-a3da-d6d2a327178f] Running
E1209 23:29:23.387077  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:29:24.668499  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.005638294s
I1209 23:29:28.212585  297827 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.49s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-648515 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)
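While "minikube tunnel" is running, the LoadBalancer service gets an ingress IP populated in its status, which is what the jsonpath query above reads back. A polling sketch under the same kubectl-context assumption:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        jsonpath := "jsonpath={.status.loadBalancer.ingress[0].ip}"
        for i := 0; i < 30; i++ {
            out, _ := exec.Command("kubectl", "--context", "functional-648515",
                "get", "svc", "nginx-svc", "-o", jsonpath).Output()
            if ip := string(out); ip != "" {
                fmt.Println("ingress IP:", ip)
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("no ingress IP assigned; is the tunnel running?")
    }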

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.169.80 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-648515 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.27s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-648515 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-648515 expose deployment hello-node --type=NodePort --port=8080
E1209 23:29:42.593136  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-qssr2" [016edcb0-bd6b-4ac5-9bd2-adf0e2ab0b77] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-qssr2" [016edcb0-bd6b-4ac5-9bd2-adf0e2ab0b77] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003947195s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.27s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "365.878762ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "64.910382ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "351.214183ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "63.519539ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

TestFunctional/parallel/MountCmd/any-port (8.08s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-648515 /tmp/TestFunctionalparallelMountCmdany-port3257181162/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733786986684953171" to /tmp/TestFunctionalparallelMountCmdany-port3257181162/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733786986684953171" to /tmp/TestFunctionalparallelMountCmdany-port3257181162/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733786986684953171" to /tmp/TestFunctionalparallelMountCmdany-port3257181162/001/test-1733786986684953171
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-648515 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (389.950947ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1209 23:29:47.075180  297827 retry.go:31] will retry after 267.513848ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  9 23:29 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  9 23:29 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  9 23:29 test-1733786986684953171
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh cat /mount-9p/test-1733786986684953171
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-648515 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [afc515cd-c9fe-4dee-9a00-fa0e1f236263] Pending
helpers_test.go:344: "busybox-mount" [afc515cd-c9fe-4dee-9a00-fa0e1f236263] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [afc515cd-c9fe-4dee-9a00-fa0e1f236263] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [afc515cd-c9fe-4dee-9a00-fa0e1f236263] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004883946s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-648515 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-648515 /tmp/TestFunctionalparallelMountCmdany-port3257181162/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.08s)
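The single retry above (retry.go:31) is routine: the 9p mount is not always visible in the guest the instant "minikube mount" starts, so the findmnt probe is retried briefly before the test would fail. A sketch of that retry pattern, assuming a minikube binary on PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        for attempt := 1; attempt <= 5; attempt++ {
            out, err := exec.Command("minikube", "-p", "functional-648515",
                "ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
            if err == nil {
                fmt.Print(string(out)) // the mount is visible in the guest
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("mount never appeared")
    }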

TestFunctional/parallel/ServiceCmd/List (0.53s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 service list -o json
functional_test.go:1494: Took "531.931236ms" to run "out/minikube-linux-arm64 -p functional-648515 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31687
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31687
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

TestFunctional/parallel/MountCmd/specific-port (2.63s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-648515 /tmp/TestFunctionalparallelMountCmdspecific-port1721295121/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-648515 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (531.927912ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1209 23:29:55.296782  297827 retry.go:31] will retry after 624.229069ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-648515 /tmp/TestFunctionalparallelMountCmdspecific-port1721295121/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-648515 ssh "sudo umount -f /mount-9p": exit status 1 (478.475008ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-648515 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-648515 /tmp/TestFunctionalparallelMountCmdspecific-port1721295121/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.63s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.47s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-648515 /tmp/TestFunctionalparallelMountCmdVerifyCleanup230113885/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-648515 /tmp/TestFunctionalparallelMountCmdVerifyCleanup230113885/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-648515 /tmp/TestFunctionalparallelMountCmdVerifyCleanup230113885/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-648515 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-648515 /tmp/TestFunctionalparallelMountCmdVerifyCleanup230113885/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-648515 /tmp/TestFunctionalparallelMountCmdVerifyCleanup230113885/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-648515 /tmp/TestFunctionalparallelMountCmdVerifyCleanup230113885/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.47s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (1.32s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-648515 version -o=json --components: (1.323289087s)
--- PASS: TestFunctional/parallel/Version/components (1.32s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-648515 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-648515
localhost/kicbase/echo-server:functional-648515
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-648515 image ls --format short --alsologtostderr:
I1209 23:30:11.899678  328932 out.go:345] Setting OutFile to fd 1 ...
I1209 23:30:11.899832  328932 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:30:11.899856  328932 out.go:358] Setting ErrFile to fd 2...
I1209 23:30:11.899868  328932 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:30:11.900197  328932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-292449/.minikube/bin
I1209 23:30:11.901146  328932 config.go:182] Loaded profile config "functional-648515": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 23:30:11.901375  328932 config.go:182] Loaded profile config "functional-648515": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 23:30:11.902002  328932 cli_runner.go:164] Run: docker container inspect functional-648515 --format={{.State.Status}}
I1209 23:30:11.926671  328932 ssh_runner.go:195] Run: systemctl --version
I1209 23:30:11.926735  328932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-648515
I1209 23:30:11.967588  328932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/functional-648515/id_rsa Username:docker}
I1209 23:30:12.060572  328932 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-648515 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/minikube-local-cache-test     | functional-648515  | 33b0d5d4da287 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 9404aea098d9e | 87MB   |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 2be0bcf609c65 | 98.3MB |
| docker.io/library/nginx                 | latest             | bdf62fd3a32f1 | 201MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | d6b061e73ae45 | 67MB   |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| localhost/kicbase/echo-server           | functional-648515  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/kube-apiserver          | v1.31.2            | f9c26480f1e72 | 92.6MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 021d242013305 | 96MB   |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 0bcd66b03df5f | 98.3MB |
| docker.io/library/nginx                 | alpine             | dba92e6b64886 | 58.3MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-648515 image ls --format table --alsologtostderr:
I1209 23:30:12.787389  329180 out.go:345] Setting OutFile to fd 1 ...
I1209 23:30:12.787618  329180 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:30:12.787652  329180 out.go:358] Setting ErrFile to fd 2...
I1209 23:30:12.787674  329180 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:30:12.787959  329180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-292449/.minikube/bin
I1209 23:30:12.788745  329180 config.go:182] Loaded profile config "functional-648515": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 23:30:12.788919  329180 config.go:182] Loaded profile config "functional-648515": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 23:30:12.789602  329180 cli_runner.go:164] Run: docker container inspect functional-648515 --format={{.State.Status}}
I1209 23:30:12.819034  329180 ssh_runner.go:195] Run: systemctl --version
I1209 23:30:12.819091  329180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-648515
I1209 23:30:12.842809  329180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/functional-648515/id_rsa Username:docker}
I1209 23:30:12.931963  329180 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-648515 image ls --format json --alsologtostderr:
[{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-648515"],"size":"4788229"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},{"id":"d6b061e73ae454743cbfe0e
3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:38def311c8c2668b4b3820de83cd518e0d1c32cda10e661163f957a87f92ca34"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"67007814"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba","repoDigests":["registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe","registry.k8s.io/kube-proxy@sha256:adabb2ce69fab82e04b441902489c8dd06f47122f00bc1062189f3cf477c795a"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"95952789"},{"id":"0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa","re
poDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:b61c0e5ba940299ee811efe946ee83e509799ea7e0651e1b782e83a665b29bae"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"98291250"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"dba92e6b6488643fe4f2e872e6e4f6c30948171890d0f2cb96f28c435352397f","repoDigests":["docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4","docker.io/library/nginx@sha256:eff2df9ac0ef6c949886d040dc2037ee6576d76161249261982fb70458ae8c26"],"repoTags":["docker.io/library/nginx:alpine"],"size":"58293755"},{"id":"2f6c962e7b8311337352d9fdea917da2184
d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270","repoDigests":["registry.k8s.io/kube-apiserver@sha256:8e7caee5c8075d84ee5b93472bedf9cf21364da1d72d60d3de15dfa0d172ff63","registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"92632544"},{"id":"9404aea098d9e80cb648d86c07d56130a1fe875ed7c252625
1c2ae68a9bf07ba","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752","registry.k8s.io/kube-controller-manager@sha256:b8d51076af39954cadc718ae40bd8a736ae5ad4e0654465ae91886cad3a9b602"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"86996294"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903","repoDigests":["docker.io
/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"98274354"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"bdf62fd3a32f1209270ede068b6e08450dfe125c79b1a8ba8f5685090023bf7f","repoDigests":["docker.io/library/nginx@sha256:6d3e464bc399ce5b0cd6a165162deb5926803c1c0ae8a1983ba0a1982b97a7a2","docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be"],"repoTags":["docker.io/library/nginx:latest"],"size":"201166247"},{"id":"33b0d5d4da287b3c9c611e6a560a22f10999fcc00a950
5fd47e49e07297addb4","repoDigests":["localhost/minikube-local-cache-test@sha256:aad230b0d575e70f0488e83f29851d465d1b4f05eefaf63b5fcb7e256c13a3cf"],"repoTags":["localhost/minikube-local-cache-test:functional-648515"],"size":"3330"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-648515 image ls --format json --alsologtostderr:
I1209 23:30:12.521172  329094 out.go:345] Setting OutFile to fd 1 ...
I1209 23:30:12.521588  329094 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:30:12.521598  329094 out.go:358] Setting ErrFile to fd 2...
I1209 23:30:12.521604  329094 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:30:12.521907  329094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-292449/.minikube/bin
I1209 23:30:12.522583  329094 config.go:182] Loaded profile config "functional-648515": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 23:30:12.522696  329094 config.go:182] Loaded profile config "functional-648515": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 23:30:12.523205  329094 cli_runner.go:164] Run: docker container inspect functional-648515 --format={{.State.Status}}
I1209 23:30:12.542284  329094 ssh_runner.go:195] Run: systemctl --version
I1209 23:30:12.542340  329094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-648515
I1209 23:30:12.560668  329094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/functional-648515/id_rsa Username:docker}
I1209 23:30:12.652802  329094 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-648515 image ls --format yaml --alsologtostderr:
- id: 0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:b61c0e5ba940299ee811efe946ee83e509799ea7e0651e1b782e83a665b29bae
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "98291250"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 33b0d5d4da287b3c9c611e6a560a22f10999fcc00a9505fd47e49e07297addb4
repoDigests:
- localhost/minikube-local-cache-test@sha256:aad230b0d575e70f0488e83f29851d465d1b4f05eefaf63b5fcb7e256c13a3cf
repoTags:
- localhost/minikube-local-cache-test:functional-648515
size: "3330"
- id: 021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba
repoDigests:
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
- registry.k8s.io/kube-proxy@sha256:adabb2ce69fab82e04b441902489c8dd06f47122f00bc1062189f3cf477c795a
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "95952789"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903
repoDigests:
- docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "98274354"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-648515
size: "4788229"
- id: f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:8e7caee5c8075d84ee5b93472bedf9cf21364da1d72d60d3de15dfa0d172ff63
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "92632544"
- id: d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:38def311c8c2668b4b3820de83cd518e0d1c32cda10e661163f957a87f92ca34
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "67007814"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: dba92e6b6488643fe4f2e872e6e4f6c30948171890d0f2cb96f28c435352397f
repoDigests:
- docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4
- docker.io/library/nginx@sha256:eff2df9ac0ef6c949886d040dc2037ee6576d76161249261982fb70458ae8c26
repoTags:
- docker.io/library/nginx:alpine
size: "58293755"
- id: bdf62fd3a32f1209270ede068b6e08450dfe125c79b1a8ba8f5685090023bf7f
repoDigests:
- docker.io/library/nginx@sha256:6d3e464bc399ce5b0cd6a165162deb5926803c1c0ae8a1983ba0a1982b97a7a2
- docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be
repoTags:
- docker.io/library/nginx:latest
size: "201166247"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
- registry.k8s.io/kube-controller-manager@sha256:b8d51076af39954cadc718ae40bd8a736ae5ad4e0654465ae91886cad3a9b602
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "86996294"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-648515 image ls --format yaml --alsologtostderr:
I1209 23:30:12.215745  329019 out.go:345] Setting OutFile to fd 1 ...
I1209 23:30:12.215955  329019 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:30:12.215981  329019 out.go:358] Setting ErrFile to fd 2...
I1209 23:30:12.216001  329019 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:30:12.216292  329019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-292449/.minikube/bin
I1209 23:30:12.217011  329019 config.go:182] Loaded profile config "functional-648515": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 23:30:12.217209  329019 config.go:182] Loaded profile config "functional-648515": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 23:30:12.217739  329019 cli_runner.go:164] Run: docker container inspect functional-648515 --format={{.State.Status}}
I1209 23:30:12.238111  329019 ssh_runner.go:195] Run: systemctl --version
I1209 23:30:12.238167  329019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-648515
I1209 23:30:12.261025  329019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/functional-648515/id_rsa Username:docker}
I1209 23:30:12.352331  329019 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-648515 ssh pgrep buildkitd: exit status 1 (353.111278ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 image build -t localhost/my-image:functional-648515 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-648515 image build -t localhost/my-image:functional-648515 testdata/build --alsologtostderr: (3.395468239s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-648515 image build -t localhost/my-image:functional-648515 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 82653504f3a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-648515
--> f825fac0c56
Successfully tagged localhost/my-image:functional-648515
f825fac0c560646ea4e1f432dc1063b3a6e29c3931e616460f5a5dba126a03dc
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-648515 image build -t localhost/my-image:functional-648515 testdata/build --alsologtostderr:
I1209 23:30:12.431496  329083 out.go:345] Setting OutFile to fd 1 ...
I1209 23:30:12.432066  329083 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:30:12.432075  329083 out.go:358] Setting ErrFile to fd 2...
I1209 23:30:12.432081  329083 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:30:12.432690  329083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-292449/.minikube/bin
I1209 23:30:12.433669  329083 config.go:182] Loaded profile config "functional-648515": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 23:30:12.435142  329083 config.go:182] Loaded profile config "functional-648515": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 23:30:12.435871  329083 cli_runner.go:164] Run: docker container inspect functional-648515 --format={{.State.Status}}
I1209 23:30:12.459740  329083 ssh_runner.go:195] Run: systemctl --version
I1209 23:30:12.459795  329083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-648515
I1209 23:30:12.484536  329083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/functional-648515/id_rsa Username:docker}
I1209 23:30:12.576738  329083 build_images.go:161] Building image from path: /tmp/build.969737192.tar
I1209 23:30:12.576825  329083 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1209 23:30:12.590303  329083 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.969737192.tar
I1209 23:30:12.597988  329083 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.969737192.tar: stat -c "%s %y" /var/lib/minikube/build/build.969737192.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.969737192.tar': No such file or directory
I1209 23:30:12.598026  329083 ssh_runner.go:362] scp /tmp/build.969737192.tar --> /var/lib/minikube/build/build.969737192.tar (3072 bytes)
I1209 23:30:12.627686  329083 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.969737192
I1209 23:30:12.637001  329083 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.969737192 -xf /var/lib/minikube/build/build.969737192.tar
I1209 23:30:12.646897  329083 crio.go:315] Building image: /var/lib/minikube/build/build.969737192
I1209 23:30:12.647004  329083 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-648515 /var/lib/minikube/build/build.969737192 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1209 23:30:15.730637  329083 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-648515 /var/lib/minikube/build/build.969737192 --cgroup-manager=cgroupfs: (3.083600684s)
I1209 23:30:15.730725  329083 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.969737192
I1209 23:30:15.740131  329083 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.969737192.tar
I1209 23:30:15.749281  329083 build_images.go:217] Built localhost/my-image:functional-648515 from /tmp/build.969737192.tar
I1209 23:30:15.749312  329083 build_images.go:133] succeeded building to: functional-648515
I1209 23:30:15.749318  329083 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.00s)
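For reference, the three STEP lines in the build output above imply that testdata/build contains a Dockerfile roughly like the following. This is a reconstruction from the logged steps, not the verbatim file:

	# Step 1/3: base image podman pulls during the build
	FROM gcr.io/k8s-minikube/busybox
	# Step 2/3: no-op layer
	RUN true
	# Step 3/3: copy the test payload into the image root
	ADD content.txt /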

TestFunctional/parallel/ImageCommands/Setup (1.52s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.475821539s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-648515
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.52s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 image load --daemon kicbase/echo-server:functional-648515 --alsologtostderr
E1209 23:30:03.076795  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-648515 image load --daemon kicbase/echo-server:functional-648515 --alsologtostderr: (3.312123787s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.62s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 image load --daemon kicbase/echo-server:functional-648515 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.02s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-648515
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 image load --daemon kicbase/echo-server:functional-648515 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.24s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 image save kicbase/echo-server:functional-648515 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 image rm kicbase/echo-server:functional-648515 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 image ls
2024/12/09 23:30:09 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.85s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-648515
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 image save --daemon kicbase/echo-server:functional-648515 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-648515
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.68s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.27s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-648515 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-648515
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-648515
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-648515
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (179.79s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-979427 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1209 23:30:44.038107  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:32:05.961040  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-979427 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m58.927323728s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (179.79s)

TestMultiControlPlane/serial/DeployApp (8.83s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-979427 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-979427 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-979427 -- rollout status deployment/busybox: (5.822795098s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-979427 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-979427 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-979427 -- exec busybox-7dff88458-j9g2d -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-979427 -- exec busybox-7dff88458-n4pdc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-979427 -- exec busybox-7dff88458-vprsz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-979427 -- exec busybox-7dff88458-j9g2d -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-979427 -- exec busybox-7dff88458-n4pdc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-979427 -- exec busybox-7dff88458-vprsz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-979427 -- exec busybox-7dff88458-j9g2d -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-979427 -- exec busybox-7dff88458-n4pdc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-979427 -- exec busybox-7dff88458-vprsz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.83s)

TestMultiControlPlane/serial/PingHostFromPods (1.61s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-979427 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-979427 -- exec busybox-7dff88458-j9g2d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-979427 -- exec busybox-7dff88458-j9g2d -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-979427 -- exec busybox-7dff88458-n4pdc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-979427 -- exec busybox-7dff88458-n4pdc -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-979427 -- exec busybox-7dff88458-vprsz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-979427 -- exec busybox-7dff88458-vprsz -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.61s)

TestMultiControlPlane/serial/AddWorkerNode (38.15s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-979427 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-979427 -v=7 --alsologtostderr: (37.17147036s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (38.15s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-979427 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.99s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.99s)

TestMultiControlPlane/serial/CopyFile (19.3s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-979427 status --output json -v=7 --alsologtostderr: (1.022476874s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 cp testdata/cp-test.txt ha-979427:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 cp ha-979427:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile994185342/001/cp-test_ha-979427.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 cp ha-979427:/home/docker/cp-test.txt ha-979427-m02:/home/docker/cp-test_ha-979427_ha-979427-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427-m02 "sudo cat /home/docker/cp-test_ha-979427_ha-979427-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 cp ha-979427:/home/docker/cp-test.txt ha-979427-m03:/home/docker/cp-test_ha-979427_ha-979427-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427-m03 "sudo cat /home/docker/cp-test_ha-979427_ha-979427-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 cp ha-979427:/home/docker/cp-test.txt ha-979427-m04:/home/docker/cp-test_ha-979427_ha-979427-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427-m04 "sudo cat /home/docker/cp-test_ha-979427_ha-979427-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 cp testdata/cp-test.txt ha-979427-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 cp ha-979427-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile994185342/001/cp-test_ha-979427-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 cp ha-979427-m02:/home/docker/cp-test.txt ha-979427:/home/docker/cp-test_ha-979427-m02_ha-979427.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427 "sudo cat /home/docker/cp-test_ha-979427-m02_ha-979427.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 cp ha-979427-m02:/home/docker/cp-test.txt ha-979427-m03:/home/docker/cp-test_ha-979427-m02_ha-979427-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427-m03 "sudo cat /home/docker/cp-test_ha-979427-m02_ha-979427-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 cp ha-979427-m02:/home/docker/cp-test.txt ha-979427-m04:/home/docker/cp-test_ha-979427-m02_ha-979427-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427-m04 "sudo cat /home/docker/cp-test_ha-979427-m02_ha-979427-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 cp testdata/cp-test.txt ha-979427-m03:/home/docker/cp-test.txt
E1209 23:34:18.722649  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:34:18.729043  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:34:18.740405  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:34:18.762133  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:34:18.804731  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:34:18.886335  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427-m03 "sudo cat /home/docker/cp-test.txt"
E1209 23:34:19.048084  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 cp ha-979427-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile994185342/001/cp-test_ha-979427-m03.txt
E1209 23:34:19.370798  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 cp ha-979427-m03:/home/docker/cp-test.txt ha-979427:/home/docker/cp-test_ha-979427-m03_ha-979427.txt
E1209 23:34:20.015813  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427 "sudo cat /home/docker/cp-test_ha-979427-m03_ha-979427.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 cp ha-979427-m03:/home/docker/cp-test.txt ha-979427-m02:/home/docker/cp-test_ha-979427-m03_ha-979427-m02.txt
E1209 23:34:21.310743  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427-m02 "sudo cat /home/docker/cp-test_ha-979427-m03_ha-979427-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 cp ha-979427-m03:/home/docker/cp-test.txt ha-979427-m04:/home/docker/cp-test_ha-979427-m03_ha-979427-m04.txt
E1209 23:34:22.098184  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427-m04 "sudo cat /home/docker/cp-test_ha-979427-m03_ha-979427-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 cp testdata/cp-test.txt ha-979427-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 cp ha-979427-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile994185342/001/cp-test_ha-979427-m04.txt
E1209 23:34:23.872719  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 cp ha-979427-m04:/home/docker/cp-test.txt ha-979427:/home/docker/cp-test_ha-979427-m04_ha-979427.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427 "sudo cat /home/docker/cp-test_ha-979427-m04_ha-979427.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 cp ha-979427-m04:/home/docker/cp-test.txt ha-979427-m02:/home/docker/cp-test_ha-979427-m04_ha-979427-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427-m02 "sudo cat /home/docker/cp-test_ha-979427-m04_ha-979427-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 cp ha-979427-m04:/home/docker/cp-test.txt ha-979427-m03:/home/docker/cp-test_ha-979427-m04_ha-979427-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427-m03 "sudo cat /home/docker/cp-test_ha-979427-m04_ha-979427-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.30s)
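The CopyFile sequence above is one pattern repeated for every node pair: minikube cp moves a file in or out, then minikube ssh -n <node> cats it back to confirm the contents arrived intact. A minimal sketch of one round trip, reusing the profile and node names from this run (the /tmp destination is illustrative; the test writes into a per-test temp dir):

	# push a local file to a node, verify it, then pull it back to the host
	out/minikube-linux-arm64 -p ha-979427 cp testdata/cp-test.txt ha-979427-m02:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p ha-979427 ssh -n ha-979427-m02 "sudo cat /home/docker/cp-test.txt"
	out/minikube-linux-arm64 -p ha-979427 cp ha-979427-m02:/home/docker/cp-test.txt /tmp/cp-test_ha-979427-m02.txt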

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 node stop m02 -v=7 --alsologtostderr
E1209 23:34:28.994203  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:34:39.235539  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-979427 node stop m02 -v=7 --alsologtostderr: (11.964067468s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-979427 status -v=7 --alsologtostderr: exit status 7 (835.117612ms)

                                                
                                                
-- stdout --
	ha-979427
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-979427-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-979427-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-979427-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 23:34:39.748868  344970 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:34:39.749079  344970 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:34:39.749093  344970 out.go:358] Setting ErrFile to fd 2...
	I1209 23:34:39.749099  344970 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:34:39.749476  344970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-292449/.minikube/bin
	I1209 23:34:39.749725  344970 out.go:352] Setting JSON to false
	I1209 23:34:39.749781  344970 mustload.go:65] Loading cluster: ha-979427
	I1209 23:34:39.749863  344970 notify.go:220] Checking for updates...
	I1209 23:34:39.750871  344970 config.go:182] Loaded profile config "ha-979427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:34:39.750900  344970 status.go:174] checking status of ha-979427 ...
	I1209 23:34:39.751539  344970 cli_runner.go:164] Run: docker container inspect ha-979427 --format={{.State.Status}}
	I1209 23:34:39.772098  344970 status.go:371] ha-979427 host status = "Running" (err=<nil>)
	I1209 23:34:39.772132  344970 host.go:66] Checking if "ha-979427" exists ...
	I1209 23:34:39.772459  344970 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-979427
	I1209 23:34:39.800764  344970 host.go:66] Checking if "ha-979427" exists ...
	I1209 23:34:39.801057  344970 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 23:34:39.801146  344970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-979427
	I1209 23:34:39.819588  344970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/ha-979427/id_rsa Username:docker}
	I1209 23:34:39.908757  344970 ssh_runner.go:195] Run: systemctl --version
	I1209 23:34:39.913232  344970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 23:34:39.927330  344970 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:34:39.998889  344970 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-12-09 23:34:39.987923269 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1209 23:34:40.009629  344970 kubeconfig.go:125] found "ha-979427" server: "https://192.168.49.254:8443"
	I1209 23:34:40.009696  344970 api_server.go:166] Checking apiserver status ...
	I1209 23:34:40.009758  344970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:34:40.036522  344970 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1402/cgroup
	I1209 23:34:40.054709  344970 api_server.go:182] apiserver freezer: "13:freezer:/docker/d9f5c289c22de9d35f4031c56b12b88ae934ccddd9a6ec2e0324a58a80b5095e/crio/crio-241c096fff4b7d3c969449994e10f72370b7429bea47cd80c6abbb0fc06b62a4"
	I1209 23:34:40.054783  344970 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d9f5c289c22de9d35f4031c56b12b88ae934ccddd9a6ec2e0324a58a80b5095e/crio/crio-241c096fff4b7d3c969449994e10f72370b7429bea47cd80c6abbb0fc06b62a4/freezer.state
	I1209 23:34:40.091756  344970 api_server.go:204] freezer state: "THAWED"
	I1209 23:34:40.091787  344970 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 23:34:40.100611  344970 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 23:34:40.100671  344970 status.go:463] ha-979427 apiserver status = Running (err=<nil>)
	I1209 23:34:40.100749  344970 status.go:176] ha-979427 status: &{Name:ha-979427 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 23:34:40.100826  344970 status.go:174] checking status of ha-979427-m02 ...
	I1209 23:34:40.101380  344970 cli_runner.go:164] Run: docker container inspect ha-979427-m02 --format={{.State.Status}}
	I1209 23:34:40.121345  344970 status.go:371] ha-979427-m02 host status = "Stopped" (err=<nil>)
	I1209 23:34:40.121375  344970 status.go:384] host is not running, skipping remaining checks
	I1209 23:34:40.121383  344970 status.go:176] ha-979427-m02 status: &{Name:ha-979427-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 23:34:40.121405  344970 status.go:174] checking status of ha-979427-m03 ...
	I1209 23:34:40.121860  344970 cli_runner.go:164] Run: docker container inspect ha-979427-m03 --format={{.State.Status}}
	I1209 23:34:40.143343  344970 status.go:371] ha-979427-m03 host status = "Running" (err=<nil>)
	I1209 23:34:40.143374  344970 host.go:66] Checking if "ha-979427-m03" exists ...
	I1209 23:34:40.143718  344970 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-979427-m03
	I1209 23:34:40.167160  344970 host.go:66] Checking if "ha-979427-m03" exists ...
	I1209 23:34:40.167498  344970 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 23:34:40.167556  344970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-979427-m03
	I1209 23:34:40.187840  344970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/ha-979427-m03/id_rsa Username:docker}
	I1209 23:34:40.293142  344970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 23:34:40.306819  344970 kubeconfig.go:125] found "ha-979427" server: "https://192.168.49.254:8443"
	I1209 23:34:40.306857  344970 api_server.go:166] Checking apiserver status ...
	I1209 23:34:40.306904  344970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:34:40.320400  344970 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1316/cgroup
	I1209 23:34:40.331370  344970 api_server.go:182] apiserver freezer: "13:freezer:/docker/ddf10744753b109626f1659d6dfa9f2af627b1335ee71ded950e6dd4cc5dfff3/crio/crio-7007a40d2eaccba505c13ea330e2bc15950aa68191939bd7177b01c7c874fed8"
	I1209 23:34:40.331480  344970 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ddf10744753b109626f1659d6dfa9f2af627b1335ee71ded950e6dd4cc5dfff3/crio/crio-7007a40d2eaccba505c13ea330e2bc15950aa68191939bd7177b01c7c874fed8/freezer.state
	I1209 23:34:40.342490  344970 api_server.go:204] freezer state: "THAWED"
	I1209 23:34:40.342521  344970 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 23:34:40.350542  344970 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 23:34:40.350583  344970 status.go:463] ha-979427-m03 apiserver status = Running (err=<nil>)
	I1209 23:34:40.350593  344970 status.go:176] ha-979427-m03 status: &{Name:ha-979427-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 23:34:40.350615  344970 status.go:174] checking status of ha-979427-m04 ...
	I1209 23:34:40.350949  344970 cli_runner.go:164] Run: docker container inspect ha-979427-m04 --format={{.State.Status}}
	I1209 23:34:40.369848  344970 status.go:371] ha-979427-m04 host status = "Running" (err=<nil>)
	I1209 23:34:40.369877  344970 host.go:66] Checking if "ha-979427-m04" exists ...
	I1209 23:34:40.370179  344970 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-979427-m04
	I1209 23:34:40.389886  344970 host.go:66] Checking if "ha-979427-m04" exists ...
	I1209 23:34:40.390202  344970 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 23:34:40.390258  344970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-979427-m04
	I1209 23:34:40.414129  344970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/ha-979427-m04/id_rsa Username:docker}
	I1209 23:34:40.504432  344970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 23:34:40.519714  344970 status.go:176] ha-979427-m04 status: &{Name:ha-979427-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.80s)
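Worth noting for anyone scripting against this: status returns a non-zero code (7 here) while any node is stopped, so the "Non-zero exit" above is the expected outcome, not a failure. A sketch, assuming the same profile name:

	out/minikube-linux-arm64 -p ha-979427 node stop m02 -v=7 --alsologtostderr
	# status exits 7 while m02 is down; keep the shell from aborting on it
	out/minikube-linux-arm64 -p ha-979427 status -v=7 --alsologtostderr || true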

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (23.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 node start m02 -v=7 --alsologtostderr
E1209 23:34:49.803241  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:34:59.717553  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-979427 node start m02 -v=7 --alsologtostderr: (21.616506768s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-979427 status -v=7 --alsologtostderr: (1.642030406s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (23.43s)
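Bringing the stopped control-plane node back is the mirror image; after node start, status returns to exit code 0 and kubectl sees all four nodes again:

	out/minikube-linux-arm64 -p ha-979427 node start m02 -v=7 --alsologtostderr
	out/minikube-linux-arm64 -p ha-979427 status -v=7 --alsologtostderr
	kubectl get nodes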

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.435180913s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.44s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (200.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-979427 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-979427 -v=7 --alsologtostderr
E1209 23:35:40.678835  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-979427 -v=7 --alsologtostderr: (37.240824611s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-979427 --wait=true -v=7 --alsologtostderr
E1209 23:37:02.601029  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-979427 --wait=true -v=7 --alsologtostderr: (2m43.146121245s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-979427
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (200.58s)
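The stop/start cycle this test drives, condensed; --wait=true makes start block until every component reports healthy, and the node list is captured before and after to prove no node was dropped:

	out/minikube-linux-arm64 node list -p ha-979427
	out/minikube-linux-arm64 stop -p ha-979427 -v=7 --alsologtostderr
	out/minikube-linux-arm64 start -p ha-979427 --wait=true -v=7 --alsologtostderr
	out/minikube-linux-arm64 node list -p ha-979427    # expect the same nodes as before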

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-979427 node delete m03 -v=7 --alsologtostderr: (11.679352734s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.68s)
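Node deletion plus the Ready check, as run above. The go-template prints one Ready condition status per node, so the assertion is simply that every line reads True; it is re-quoted here (outer single quotes) so it pastes into a shell cleanly:

	out/minikube-linux-arm64 -p ha-979427 node delete m03 -v=7 --alsologtostderr
	kubectl get nodes -o 'go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'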

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-979427 stop -v=7 --alsologtostderr: (35.60852441s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-979427 status -v=7 --alsologtostderr: exit status 7 (121.066346ms)

                                                
                                                
-- stdout --
	ha-979427
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-979427-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-979427-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 23:39:15.883596  359537 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:39:15.883768  359537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:39:15.883807  359537 out.go:358] Setting ErrFile to fd 2...
	I1209 23:39:15.883820  359537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:39:15.884094  359537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-292449/.minikube/bin
	I1209 23:39:15.884334  359537 out.go:352] Setting JSON to false
	I1209 23:39:15.884374  359537 mustload.go:65] Loading cluster: ha-979427
	I1209 23:39:15.884484  359537 notify.go:220] Checking for updates...
	I1209 23:39:15.884835  359537 config.go:182] Loaded profile config "ha-979427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:39:15.884853  359537 status.go:174] checking status of ha-979427 ...
	I1209 23:39:15.885709  359537 cli_runner.go:164] Run: docker container inspect ha-979427 --format={{.State.Status}}
	I1209 23:39:15.904456  359537 status.go:371] ha-979427 host status = "Stopped" (err=<nil>)
	I1209 23:39:15.904481  359537 status.go:384] host is not running, skipping remaining checks
	I1209 23:39:15.904488  359537 status.go:176] ha-979427 status: &{Name:ha-979427 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 23:39:15.904520  359537 status.go:174] checking status of ha-979427-m02 ...
	I1209 23:39:15.904813  359537 cli_runner.go:164] Run: docker container inspect ha-979427-m02 --format={{.State.Status}}
	I1209 23:39:15.928986  359537 status.go:371] ha-979427-m02 host status = "Stopped" (err=<nil>)
	I1209 23:39:15.929011  359537 status.go:384] host is not running, skipping remaining checks
	I1209 23:39:15.929018  359537 status.go:176] ha-979427-m02 status: &{Name:ha-979427-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 23:39:15.929037  359537 status.go:174] checking status of ha-979427-m04 ...
	I1209 23:39:15.929355  359537 cli_runner.go:164] Run: docker container inspect ha-979427-m04 --format={{.State.Status}}
	I1209 23:39:15.946415  359537 status.go:371] ha-979427-m04 host status = "Stopped" (err=<nil>)
	I1209 23:39:15.946449  359537 status.go:384] host is not running, skipping remaining checks
	I1209 23:39:15.946456  359537 status.go:176] ha-979427-m04 status: &{Name:ha-979427-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.73s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (100.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-979427 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1209 23:39:18.722525  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:39:22.097646  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:39:46.442841  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-979427 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m39.972524683s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (100.91s)
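Restarting from fully stopped reuses the same start invocation as the original cluster creation, driver and runtime flags included:

	out/minikube-linux-arm64 start -p ha-979427 --wait=true -v=7 --alsologtostderr --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p ha-979427 status -v=7 --alsologtostderr
	kubectl get nodes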

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (75.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-979427 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-979427 --control-plane -v=7 --alsologtostderr: (1m14.204516882s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-979427 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-979427 status -v=7 --alsologtostderr: (1.015432387s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.22s)
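Growing the control plane back to three members is a single node add with --control-plane; status afterwards should list the new node as a Control Plane entry:

	out/minikube-linux-arm64 node add -p ha-979427 --control-plane -v=7 --alsologtostderr
	out/minikube-linux-arm64 -p ha-979427 status -v=7 --alsologtostderr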

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.99s)

                                                
                                    
TestJSONOutput/start/Command (49.98s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-085349 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-085349 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (49.978387288s)
--- PASS: TestJSONOutput/start/Command (49.98s)
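With --output=json, each progress step comes out as one CloudEvents-style JSON object per line (the TestErrorJSONOutput transcript further down shows the exact shape), which is what the Audit and CurrentSteps subtests below assert on. A sketch of consuming that stream, assuming jq is available on the host (jq is not part of the test itself):

	out/minikube-linux-arm64 start -p json-output-085349 --output=json --user=testUser \
	    --memory=2200 --wait=true --driver=docker --container-runtime=crio \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step")
	           | .data.currentstep + "/" + .data.totalsteps + ": " + .data.message'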

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.75s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-085349 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-085349 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.84s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-085349 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-085349 --output=json --user=testUser: (5.843011604s)
--- PASS: TestJSONOutput/stop/Command (5.84s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-114083 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-114083 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (87.983607ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"263d7286-3291-4b68-8a71-951ed7f6a999","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-114083] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"984e95f8-6387-4e25-8416-89896698140f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19888"}}
	{"specversion":"1.0","id":"bfc244c0-e892-4807-9239-c168c0900b30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ddc694dd-6530-4a8c-ab03-56dee2e62263","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19888-292449/kubeconfig"}}
	{"specversion":"1.0","id":"9222107e-ceca-4e5a-aa52-c89b5644aff8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-292449/.minikube"}}
	{"specversion":"1.0","id":"6dcebbca-d214-4301-be4f-17b4b0d6b40d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"47d2f440-faf6-426f-b847-763bde67db75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c115beeb-6bd9-45e9-9f77-d5c9462ebc6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-114083" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-114083
--- PASS: TestErrorJSONOutput (0.25s)
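The failure path emits a single io.k8s.sigs.minikube.error event whose data block carries the exit code and message (see the stdout above), so a machine consumer can pick the error out of the stream the same way, again assuming jq:

	out/minikube-linux-arm64 start -p json-output-error-114083 --memory=2200 \
	    --output=json --wait=true --driver=fail \
	  | jq 'select(.type == "io.k8s.sigs.minikube.error") | .data'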

                                                
                                    
TestKicCustomNetwork/create_custom_network (37.87s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-640173 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-640173 --network=: (35.732411598s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-640173" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-640173
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-640173: (2.111289013s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.87s)
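An empty --network= value asks minikube to create its own labelled bridge network (named after the profile in this test), which is why the check only needs docker network ls to confirm it exists and a delete to clean it up:

	out/minikube-linux-arm64 start -p docker-network-640173 --network=
	docker network ls --format {{.Name}}    # the profile's network should appear
	out/minikube-linux-arm64 delete -p docker-network-640173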

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (30.45s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-017536 --network=bridge
E1209 23:44:18.722344  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:44:22.098516  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-017536 --network=bridge: (28.330314954s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-017536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-017536
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-017536: (2.090206674s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (30.45s)

                                                
                                    
TestKicExistingNetwork (36.44s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1209 23:44:32.295586  297827 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1209 23:44:32.310956  297827 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1209 23:44:32.311042  297827 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1209 23:44:32.311060  297827 cli_runner.go:164] Run: docker network inspect existing-network
W1209 23:44:32.327598  297827 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1209 23:44:32.327628  297827 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1209 23:44:32.327646  297827 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1209 23:44:32.327746  297827 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1209 23:44:32.346148  297827 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ca05029b7795 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c4:a9:b7:d1} reservation:<nil>}
I1209 23:44:32.346578  297827 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001773760}
I1209 23:44:32.346609  297827 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1209 23:44:32.346658  297827 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1209 23:44:32.420318  297827 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-346210 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-346210 --network=existing-network: (34.24978682s)
helpers_test.go:175: Cleaning up "existing-network-346210" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-346210
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-346210: (2.020763002s)
I1209 23:45:08.707773  297827 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.44s)
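Here the network is created out-of-band with plain docker first, and minikube is pointed at it by name; the create command below is lifted from the log above, including the minikube.sigs.k8s.io labels the test later filters on:

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
	    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	    --label=created_by.minikube.sigs.k8s.io=true \
	    --label=name.minikube.sigs.k8s.io=existing-network \
	    existing-network
	out/minikube-linux-arm64 start -p existing-network-346210 --network=existing-network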

                                                
                                    
TestKicCustomSubnet (34.65s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-506642 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-506642 --subnet=192.168.60.0/24: (32.579027644s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-506642 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-506642" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-506642
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-506642: (2.046953748s)
--- PASS: TestKicCustomSubnet (34.65s)
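The subnet check is a one-liner against docker's IPAM config for the network minikube created:

	out/minikube-linux-arm64 start -p custom-subnet-506642 --subnet=192.168.60.0/24
	docker network inspect custom-subnet-506642 --format "{{(index .IPAM.Config 0).Subnet}}"    # expect 192.168.60.0/24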

                                                
                                    
TestKicStaticIP (32.97s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-812483 --static-ip=192.168.200.200
E1209 23:45:45.166225  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-812483 --static-ip=192.168.200.200: (30.644882542s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-812483 ip
helpers_test.go:175: Cleaning up "static-ip-812483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-812483
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-812483: (2.168627381s)
--- PASS: TestKicStaticIP (32.97s)
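Same idea for the static IP: start with --static-ip and confirm minikube ip reports the requested address:

	out/minikube-linux-arm64 start -p static-ip-812483 --static-ip=192.168.200.200
	out/minikube-linux-arm64 -p static-ip-812483 ip    # expect 192.168.200.200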

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (67.39s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-722413 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-722413 --driver=docker  --container-runtime=crio: (28.619483992s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-725120 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-725120 --driver=docker  --container-runtime=crio: (32.89518113s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-722413
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-725120
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-725120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-725120
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-725120: (2.060735385s)
helpers_test.go:175: Cleaning up "first-722413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-722413
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-722413: (2.357627813s)
--- PASS: TestMinikubeProfile (67.39s)
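This test is really about switching the active profile between two running clusters and reading it back: profile <name> sets the current profile, and profile list -ojson is what the test parses to confirm the switch took effect.

	out/minikube-linux-arm64 start -p first-722413 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 start -p second-725120 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 profile first-722413     # make first-722413 the active profile
	out/minikube-linux-arm64 profile list -ojson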

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.39s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-858829 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-858829 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.392454121s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.39s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-858829 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.19s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-861012 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-861012 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.188914443s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.19s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-861012 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-858829 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-858829 --alsologtostderr -v=5: (1.646626596s)
--- PASS: TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-861012 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-861012
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-861012: (1.207723605s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.8s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-861012
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-861012: (6.803654247s)
--- PASS: TestMountStart/serial/RestartStopped (7.80s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-861012 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (78.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-759900 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-759900 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m18.440231368s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (78.97s)
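
Bringing up the same two-node topology by hand; a minimal sketch (profile name illustrative):

	minikube start -p multi-demo --nodes=2 --memory=2200 --wait=true \
	  --driver=docker --container-runtime=crio
	minikube -p multi-demo status --alsologtostderr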

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-759900 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-759900 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-759900 -- rollout status deployment/busybox: (4.852911412s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-759900 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-759900 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-759900 -- exec busybox-7dff88458-kthtp -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-759900 -- exec busybox-7dff88458-lm276 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-759900 -- exec busybox-7dff88458-kthtp -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-759900 -- exec busybox-7dff88458-lm276 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-759900 -- exec busybox-7dff88458-kthtp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-759900 -- exec busybox-7dff88458-lm276 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.78s)
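
The in-cluster DNS checks can be replayed against any pod of the busybox deployment; a minimal sketch, with <pod> standing in for a name returned by the get pods query:

	minikube kubectl -p multi-demo -- rollout status deployment/busybox
	minikube kubectl -p multi-demo -- get pods -o jsonpath='{.items[*].metadata.name}'
	# substitute a real pod name for <pod>:
	minikube kubectl -p multi-demo -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local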

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-759900 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-759900 -- exec busybox-7dff88458-kthtp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-759900 -- exec busybox-7dff88458-kthtp -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-759900 -- exec busybox-7dff88458-lm276 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-759900 -- exec busybox-7dff88458-lm276 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                    
TestMultiNode/serial/AddNode (28.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-759900 -v 3 --alsologtostderr
E1209 23:49:18.722260  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:49:22.097540  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-759900 -v 3 --alsologtostderr: (27.66464836s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.37s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-759900 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 cp testdata/cp-test.txt multinode-759900:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 ssh -n multinode-759900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 cp multinode-759900:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3504012130/001/cp-test_multinode-759900.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 ssh -n multinode-759900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 cp multinode-759900:/home/docker/cp-test.txt multinode-759900-m02:/home/docker/cp-test_multinode-759900_multinode-759900-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 ssh -n multinode-759900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 ssh -n multinode-759900-m02 "sudo cat /home/docker/cp-test_multinode-759900_multinode-759900-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 cp multinode-759900:/home/docker/cp-test.txt multinode-759900-m03:/home/docker/cp-test_multinode-759900_multinode-759900-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 ssh -n multinode-759900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 ssh -n multinode-759900-m03 "sudo cat /home/docker/cp-test_multinode-759900_multinode-759900-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 cp testdata/cp-test.txt multinode-759900-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 ssh -n multinode-759900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 cp multinode-759900-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3504012130/001/cp-test_multinode-759900-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 ssh -n multinode-759900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 cp multinode-759900-m02:/home/docker/cp-test.txt multinode-759900:/home/docker/cp-test_multinode-759900-m02_multinode-759900.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 ssh -n multinode-759900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 ssh -n multinode-759900 "sudo cat /home/docker/cp-test_multinode-759900-m02_multinode-759900.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 cp multinode-759900-m02:/home/docker/cp-test.txt multinode-759900-m03:/home/docker/cp-test_multinode-759900-m02_multinode-759900-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 ssh -n multinode-759900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 ssh -n multinode-759900-m03 "sudo cat /home/docker/cp-test_multinode-759900-m02_multinode-759900-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 cp testdata/cp-test.txt multinode-759900-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 ssh -n multinode-759900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 cp multinode-759900-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3504012130/001/cp-test_multinode-759900-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 ssh -n multinode-759900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 cp multinode-759900-m03:/home/docker/cp-test.txt multinode-759900:/home/docker/cp-test_multinode-759900-m03_multinode-759900.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 ssh -n multinode-759900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 ssh -n multinode-759900 "sudo cat /home/docker/cp-test_multinode-759900-m03_multinode-759900.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 cp multinode-759900-m03:/home/docker/cp-test.txt multinode-759900-m02:/home/docker/cp-test_multinode-759900-m03_multinode-759900-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 ssh -n multinode-759900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 ssh -n multinode-759900-m02 "sudo cat /home/docker/cp-test_multinode-759900-m03_multinode-759900-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.14s)
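
minikube cp accepts <node>:<path> on either side, which is the matrix this test walks through; a minimal sketch (profile name illustrative):

	# host -> primary node, then read it back over ssh
	minikube -p multi-demo cp testdata/cp-test.txt multi-demo:/home/docker/cp-test.txt
	minikube -p multi-demo ssh -n multi-demo "sudo cat /home/docker/cp-test.txt"
	# node -> node (primary to the m02 worker)
	minikube -p multi-demo cp multi-demo:/home/docker/cp-test.txt multi-demo-m02:/home/docker/cp-test.txt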

                                                
                                    
TestMultiNode/serial/StopNode (2.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-759900 node stop m03: (1.232283714s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-759900 status: exit status 7 (500.491415ms)

                                                
                                                
-- stdout --
	multinode-759900
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-759900-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-759900-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-759900 status --alsologtostderr: exit status 7 (509.497128ms)

                                                
                                                
-- stdout --
	multinode-759900
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-759900-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-759900-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 23:49:58.737036  412762 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:49:58.737167  412762 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:49:58.737173  412762 out.go:358] Setting ErrFile to fd 2...
	I1209 23:49:58.737178  412762 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:49:58.737543  412762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-292449/.minikube/bin
	I1209 23:49:58.737766  412762 out.go:352] Setting JSON to false
	I1209 23:49:58.737785  412762 mustload.go:65] Loading cluster: multinode-759900
	I1209 23:49:58.738455  412762 config.go:182] Loaded profile config "multinode-759900": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:49:58.738471  412762 status.go:174] checking status of multinode-759900 ...
	I1209 23:49:58.739715  412762 cli_runner.go:164] Run: docker container inspect multinode-759900 --format={{.State.Status}}
	I1209 23:49:58.741776  412762 notify.go:220] Checking for updates...
	I1209 23:49:58.762169  412762 status.go:371] multinode-759900 host status = "Running" (err=<nil>)
	I1209 23:49:58.762189  412762 host.go:66] Checking if "multinode-759900" exists ...
	I1209 23:49:58.762562  412762 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-759900
	I1209 23:49:58.780836  412762 host.go:66] Checking if "multinode-759900" exists ...
	I1209 23:49:58.781129  412762 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 23:49:58.781173  412762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-759900
	I1209 23:49:58.807733  412762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/multinode-759900/id_rsa Username:docker}
	I1209 23:49:58.896594  412762 ssh_runner.go:195] Run: systemctl --version
	I1209 23:49:58.901178  412762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 23:49:58.913392  412762 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:49:58.968913  412762 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-12-09 23:49:58.95811218 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1209 23:49:58.969558  412762 kubeconfig.go:125] found "multinode-759900" server: "https://192.168.67.2:8443"
	I1209 23:49:58.969597  412762 api_server.go:166] Checking apiserver status ...
	I1209 23:49:58.969650  412762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:49:58.981963  412762 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1375/cgroup
	I1209 23:49:58.991331  412762 api_server.go:182] apiserver freezer: "13:freezer:/docker/16c9988652639461e20be899a6638ea9b3117c4bcd1a73e59124797c8044b821/crio/crio-3f90b6961da27ced988cf09e80728c92073233dc5b54ecdc567959ca33d1e8dc"
	I1209 23:49:58.991423  412762 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/16c9988652639461e20be899a6638ea9b3117c4bcd1a73e59124797c8044b821/crio/crio-3f90b6961da27ced988cf09e80728c92073233dc5b54ecdc567959ca33d1e8dc/freezer.state
	I1209 23:49:59.000877  412762 api_server.go:204] freezer state: "THAWED"
	I1209 23:49:59.000909  412762 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1209 23:49:59.012168  412762 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1209 23:49:59.012207  412762 status.go:463] multinode-759900 apiserver status = Running (err=<nil>)
	I1209 23:49:59.012246  412762 status.go:176] multinode-759900 status: &{Name:multinode-759900 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 23:49:59.012269  412762 status.go:174] checking status of multinode-759900-m02 ...
	I1209 23:49:59.012606  412762 cli_runner.go:164] Run: docker container inspect multinode-759900-m02 --format={{.State.Status}}
	I1209 23:49:59.030740  412762 status.go:371] multinode-759900-m02 host status = "Running" (err=<nil>)
	I1209 23:49:59.030764  412762 host.go:66] Checking if "multinode-759900-m02" exists ...
	I1209 23:49:59.031067  412762 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-759900-m02
	I1209 23:49:59.049931  412762 host.go:66] Checking if "multinode-759900-m02" exists ...
	I1209 23:49:59.050238  412762 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 23:49:59.050296  412762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-759900-m02
	I1209 23:49:59.067201  412762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/19888-292449/.minikube/machines/multinode-759900-m02/id_rsa Username:docker}
	I1209 23:49:59.152739  412762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 23:49:59.165427  412762 status.go:176] multinode-759900-m02 status: &{Name:multinode-759900-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1209 23:49:59.165475  412762 status.go:174] checking status of multinode-759900-m03 ...
	I1209 23:49:59.165775  412762 cli_runner.go:164] Run: docker container inspect multinode-759900-m03 --format={{.State.Status}}
	I1209 23:49:59.183449  412762 status.go:371] multinode-759900-m03 host status = "Stopped" (err=<nil>)
	I1209 23:49:59.183474  412762 status.go:384] host is not running, skipping remaining checks
	I1209 23:49:59.183480  412762 status.go:176] multinode-759900-m03 status: &{Name:multinode-759900-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
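
Stopping one worker leaves the rest of the cluster running, and status then exits 7 to flag the down node; a minimal sketch:

	minikube -p multi-demo node stop m03
	minikube -p multi-demo status          # exit status 7 while m03 is down
	minikube -p multi-demo node start m03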

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-759900 node start m03 -v=7 --alsologtostderr: (8.742304494s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.50s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (127.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-759900
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-759900
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-759900: (24.866517038s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-759900 --wait=true -v=8 --alsologtostderr
E1209 23:50:41.804554  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-759900 --wait=true -v=8 --alsologtostderr: (1m42.806725693s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-759900
--- PASS: TestMultiNode/serial/RestartKeepsNodes (127.81s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-759900 node delete m03: (4.849174884s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.53s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-759900 stop: (23.605666272s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-759900 status: exit status 7 (95.235803ms)

                                                
                                                
-- stdout --
	multinode-759900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-759900-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-759900 status --alsologtostderr: exit status 7 (95.450747ms)

                                                
                                                
-- stdout --
	multinode-759900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-759900-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 23:52:45.781033  420525 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:52:45.781239  420525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:52:45.781271  420525 out.go:358] Setting ErrFile to fd 2...
	I1209 23:52:45.781296  420525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:52:45.781561  420525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-292449/.minikube/bin
	I1209 23:52:45.781808  420525 out.go:352] Setting JSON to false
	I1209 23:52:45.781868  420525 mustload.go:65] Loading cluster: multinode-759900
	I1209 23:52:45.782006  420525 notify.go:220] Checking for updates...
	I1209 23:52:45.782419  420525 config.go:182] Loaded profile config "multinode-759900": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:52:45.782466  420525 status.go:174] checking status of multinode-759900 ...
	I1209 23:52:45.783439  420525 cli_runner.go:164] Run: docker container inspect multinode-759900 --format={{.State.Status}}
	I1209 23:52:45.801741  420525 status.go:371] multinode-759900 host status = "Stopped" (err=<nil>)
	I1209 23:52:45.801769  420525 status.go:384] host is not running, skipping remaining checks
	I1209 23:52:45.801776  420525 status.go:176] multinode-759900 status: &{Name:multinode-759900 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 23:52:45.801808  420525 status.go:174] checking status of multinode-759900-m02 ...
	I1209 23:52:45.802118  420525 cli_runner.go:164] Run: docker container inspect multinode-759900-m02 --format={{.State.Status}}
	I1209 23:52:45.824272  420525 status.go:371] multinode-759900-m02 host status = "Stopped" (err=<nil>)
	I1209 23:52:45.824292  420525 status.go:384] host is not running, skipping remaining checks
	I1209 23:52:45.824299  420525 status.go:176] multinode-759900-m02 status: &{Name:multinode-759900-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.80s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (57.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-759900 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-759900 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (56.472734119s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-759900 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.18s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (32.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-759900
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-759900-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-759900-m02 --driver=docker  --container-runtime=crio: exit status 14 (85.089521ms)

                                                
                                                
-- stdout --
	* [multinode-759900-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19888
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19888-292449/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-292449/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-759900-m02' is duplicated with machine name 'multinode-759900-m02' in profile 'multinode-759900'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-759900-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-759900-m03 --driver=docker  --container-runtime=crio: (29.722520038s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-759900
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-759900: exit status 80 (341.870048ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-759900 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-759900-m03 already exists in multinode-759900-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-759900-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-759900-m03: (2.041522841s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.25s)
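
Worker machines inside a profile are named <profile>-m02, <profile>-m03, and so on, so a new profile may not reuse one of those names; a minimal sketch of the rejected case:

	# fails with exit status 14 (MK_USAGE): profile name collides with an existing machine name
	minikube start -p multi-demo-m02 --driver=docker --container-runtime=crio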

                                                
                                    
TestPreload (129.24s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-532369 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1209 23:54:22.098169  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-532369 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m34.06246734s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-532369 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-532369 image pull gcr.io/k8s-minikube/busybox: (3.315372663s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-532369
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-532369: (5.796458288s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-532369 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-532369 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (23.379896526s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-532369 image list
helpers_test.go:175: Cleaning up "test-preload-532369" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-532369
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-532369: (2.385151973s)
--- PASS: TestPreload (129.24s)
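
The preload scenario disables the preloaded image tarball, pulls an extra image, then restarts to confirm the image survives; a minimal sketch of the same sequence (profile name illustrative):

	minikube start -p preload-demo --preload=false --memory=2200 \
	  --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
	minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
	minikube stop -p preload-demo
	minikube start -p preload-demo       # restart; the pulled image should still be listed
	minikube -p preload-demo image list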

                                                
                                    
TestScheduledStopUnix (105.46s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-799247 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-799247 --memory=2048 --driver=docker  --container-runtime=crio: (29.306461765s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-799247 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-799247 -n scheduled-stop-799247
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-799247 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1209 23:56:58.478475  297827 retry.go:31] will retry after 131.603µs: open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/scheduled-stop-799247/pid: no such file or directory
I1209 23:56:58.479643  297827 retry.go:31] will retry after 125.11µs: open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/scheduled-stop-799247/pid: no such file or directory
I1209 23:56:58.480771  297827 retry.go:31] will retry after 293.135µs: open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/scheduled-stop-799247/pid: no such file or directory
I1209 23:56:58.481927  297827 retry.go:31] will retry after 444.05µs: open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/scheduled-stop-799247/pid: no such file or directory
I1209 23:56:58.483030  297827 retry.go:31] will retry after 677.332µs: open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/scheduled-stop-799247/pid: no such file or directory
I1209 23:56:58.484159  297827 retry.go:31] will retry after 1.011623ms: open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/scheduled-stop-799247/pid: no such file or directory
I1209 23:56:58.485299  297827 retry.go:31] will retry after 1.313642ms: open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/scheduled-stop-799247/pid: no such file or directory
I1209 23:56:58.487453  297827 retry.go:31] will retry after 2.062574ms: open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/scheduled-stop-799247/pid: no such file or directory
I1209 23:56:58.489612  297827 retry.go:31] will retry after 3.37006ms: open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/scheduled-stop-799247/pid: no such file or directory
I1209 23:56:58.495018  297827 retry.go:31] will retry after 4.997404ms: open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/scheduled-stop-799247/pid: no such file or directory
I1209 23:56:58.500372  297827 retry.go:31] will retry after 5.895845ms: open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/scheduled-stop-799247/pid: no such file or directory
I1209 23:56:58.506684  297827 retry.go:31] will retry after 5.338158ms: open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/scheduled-stop-799247/pid: no such file or directory
I1209 23:56:58.513435  297827 retry.go:31] will retry after 19.360344ms: open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/scheduled-stop-799247/pid: no such file or directory
I1209 23:56:58.533772  297827 retry.go:31] will retry after 19.152986ms: open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/scheduled-stop-799247/pid: no such file or directory
I1209 23:56:58.554054  297827 retry.go:31] will retry after 19.997944ms: open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/scheduled-stop-799247/pid: no such file or directory
I1209 23:56:58.574578  297827 retry.go:31] will retry after 59.804902ms: open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/scheduled-stop-799247/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-799247 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-799247 -n scheduled-stop-799247
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-799247
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-799247 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-799247
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-799247: exit status 7 (75.472331ms)

                                                
                                                
-- stdout --
	scheduled-stop-799247
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-799247 -n scheduled-stop-799247
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-799247 -n scheduled-stop-799247: exit status 7 (73.059601ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-799247" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-799247
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-799247: (4.518293189s)
--- PASS: TestScheduledStopUnix (105.46s)
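
Scheduled stops are armed, re-armed, and cancelled through the same stop command; a minimal sketch (durations illustrative):

	minikube stop -p sched-demo --schedule 5m        # arm a stop five minutes out
	minikube stop -p sched-demo --cancel-scheduled   # disarm it
	minikube stop -p sched-demo --schedule 15s       # re-arm; fires after 15 seconds
	minikube status -p sched-demo                    # exit status 7 once the stop has run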

                                                
                                    
TestInsufficientStorage (10.12s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-054185 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-054185 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.624762458s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"74224eb8-6c58-4b83-b32c-5dbbec9907ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-054185] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7856428f-46e4-43c9-9465-d0d6efea01c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19888"}}
	{"specversion":"1.0","id":"bf310ab5-d6b3-4d03-9cac-524447c22a97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5dc9718c-fbea-4ac1-b485-2fa09260a8e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19888-292449/kubeconfig"}}
	{"specversion":"1.0","id":"78fd1b47-9ad8-4be1-b447-818cef73200e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-292449/.minikube"}}
	{"specversion":"1.0","id":"8edb2287-488a-4488-aea9-ebe35268936d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f40f282a-57a3-4325-8f55-1aa90b0d0814","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"deb2e229-02a2-438a-a6e3-f0f0e010b605","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f3216014-052e-4a46-b12d-261dcb00d2ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d75fde7b-ce91-4b60-8b52-1f196baf405f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ce7aa26d-e708-4167-b0ae-4518189b7fd3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ff718254-f453-490e-ae57-98c7d48fbec2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-054185\" primary control-plane node in \"insufficient-storage-054185\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"61cca49f-698f-4aa2-b646-5a8c7c0c3224","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1730888964-19917 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"8548c424-4523-449d-9957-703432d4fac0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1929e985-65c7-4714-9b9a-963f0772f421","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-054185 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-054185 --output=json --layout=cluster: exit status 7 (290.731283ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-054185","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-054185","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 23:58:21.987929  438258 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-054185" does not appear in /home/jenkins/minikube-integration/19888-292449/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-054185 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-054185 --output=json --layout=cluster: exit status 7 (288.408753ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-054185","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-054185","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 23:58:22.279159  438321 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-054185" does not appear in /home/jenkins/minikube-integration/19888-292449/kubeconfig
	E1209 23:58:22.289752  438321 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/insufficient-storage-054185/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-054185" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-054185
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-054185: (1.915959262s)
--- PASS: TestInsufficientStorage (10.12s)
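
The storage check is driven by what appear to be test-only environment overrides visible in the JSON events above; a sketch under that assumption (values taken from the log):

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  minikube start -p storage-demo --output=json --driver=docker --container-runtime=crio
	# start exits 26 (RSRC_DOCKER_STORAGE); status then reports StatusCode 507
	minikube status -p storage-demo --output=json --layout=cluster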

                                                
                                    
TestRunningBinaryUpgrade (76.57s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.692201864 start -p running-upgrade-813700 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1210 00:02:25.168098  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.692201864 start -p running-upgrade-813700 --memory=2200 --vm-driver=docker  --container-runtime=crio: (42.97241346s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-813700 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-813700 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.471234657s)
helpers_test.go:175: Cleaning up "running-upgrade-813700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-813700
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-813700: (3.004016371s)
--- PASS: TestRunningBinaryUpgrade (76.57s)
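
The running-upgrade path is simply a second start on the same profile with a newer binary; a minimal sketch, assuming an older release binary is on hand (note the legacy --vm-driver spelling it expects):

	./minikube-v1.26.0 start -p upgrade-demo --memory=2200 --vm-driver=docker --container-runtime=crio
	# same profile, current binary: the running cluster is upgraded in place
	minikube start -p upgrade-demo --memory=2200 --driver=docker --container-runtime=crio
	minikube delete -p upgrade-demo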

                                                
                                    
TestKubernetesUpgrade (150.78s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-692186 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-692186 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m14.784137868s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-692186
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-692186: (1.381082127s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-692186 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-692186 status --format={{.Host}}: exit status 7 (103.088138ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-692186 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-692186 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.152876234s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-692186 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-692186 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-692186 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (94.549731ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-692186] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19888
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19888-292449/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-292449/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-692186
	    minikube start -p kubernetes-upgrade-692186 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6921862 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-692186 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-692186 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-692186 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.585783076s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-692186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-692186
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-692186: (3.577849974s)
--- PASS: TestKubernetesUpgrade (150.78s)
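
Note: the non-zero exits in this test are expected-path signals rather than failures; this run shows exit 7 for a stopped host and exit 106 for K8S_DOWNGRADE_UNSUPPORTED. A minimal sketch of how a wrapper script could branch on them (commands copied from the log above):

	out/minikube-linux-arm64 -p kubernetes-upgrade-692186 status --format={{.Host}}
	case $? in
	  0) echo "host running" ;;
	  7) echo "host stopped - normal right after 'minikube stop'" ;;
	  *) echo "unexpected status result" ;;
	esac
	out/minikube-linux-arm64 start -p kubernetes-upgrade-692186 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
	test $? -eq 106 && echo "downgrade refused, as designed"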

                                                
                                    
x
+
TestMissingContainerUpgrade (164.34s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.207574221 start -p missing-upgrade-836691 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.207574221 start -p missing-upgrade-836691 --memory=2200 --driver=docker  --container-runtime=crio: (1m25.583318648s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-836691
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-836691: (11.652609337s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-836691
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-836691 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-836691 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m3.595882946s)
helpers_test.go:175: Cleaning up "missing-upgrade-836691" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-836691
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-836691: (2.429539528s)
--- PASS: TestMissingContainerUpgrade (164.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-051454 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-051454 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (96.415587ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-051454] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19888
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19888-292449/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-292449/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
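
Note: --no-kubernetes and --kubernetes-version are mutually exclusive, which is exactly what exit status 14 (MK_USAGE) asserts here. Both valid variants appear elsewhere in this run or in the error text itself:

	# either start the profile without Kubernetes at all...
	out/minikube-linux-arm64 start -p NoKubernetes-051454 --no-kubernetes --driver=docker --container-runtime=crio
	# ...or clear a globally configured version first, as the message suggests
	out/minikube-linux-arm64 config unset kubernetes-version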

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (40.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-051454 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-051454 --driver=docker  --container-runtime=crio: (39.779679818s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-051454 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.35s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (8.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-051454 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-051454 --no-kubernetes --driver=docker  --container-runtime=crio: (6.139399921s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-051454 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-051454 status -o json: exit status 2 (304.570122ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-051454","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-051454
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-051454: (2.164552054s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.61s)
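
Note: the stdout above is the per-profile status shape; exit status 2 signals that a component is stopped while the JSON still prints. A small sketch for pulling the interesting fields out by hand (jq assumed):

	out/minikube-linux-arm64 -p NoKubernetes-051454 status -o json \
	  | jq -r '"host=\(.Host) kubelet=\(.Kubelet) apiserver=\(.APIServer)"'
	# expected for a --no-kubernetes profile: host=Running kubelet=Stopped apiserver=Stopped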

                                                
                                    
x
+
TestNoKubernetes/serial/Start (8.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-051454 --no-kubernetes --driver=docker  --container-runtime=crio
E1209 23:59:18.724368  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-051454 --no-kubernetes --driver=docker  --container-runtime=crio: (8.284855311s)
--- PASS: TestNoKubernetes/serial/Start (8.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-051454 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-051454 "sudo systemctl is-active --quiet service kubelet": exit status 1 (329.871764ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)
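
Note: this assertion leans on systemctl's exit-code contract: is-active exits 0 only when the unit is active, --quiet suppresses the state on stdout, and the status 3 seen above means inactive. Hand-run equivalent of the same check:

	out/minikube-linux-arm64 ssh -p NoKubernetes-051454 "sudo systemctl is-active --quiet service kubelet" \
	  && echo "kubelet is active" || echo "kubelet is not active (expected for --no-kubernetes)"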

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
E1209 23:59:22.098551  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.23s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-051454
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-051454: (1.267187536s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-051454 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-051454 --driver=docker  --container-runtime=crio: (7.821054658s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.82s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-051454 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-051454 "sudo systemctl is-active --quiet service kubelet": exit status 1 (322.428072ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.72s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.72s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (78.71s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2193999339 start -p stopped-upgrade-991743 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2193999339 start -p stopped-upgrade-991743 --memory=2200 --vm-driver=docker  --container-runtime=crio: (40.542058303s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2193999339 -p stopped-upgrade-991743 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2193999339 -p stopped-upgrade-991743 stop: (3.084568009s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-991743 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-991743 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.081045382s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (78.71s)
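
Note: unlike TestRunningBinaryUpgrade earlier, this variant stops the cluster with the legacy binary before the binary under test adopts it, so it exercises the cold-start upgrade path. Recapped with comments (commands as run above):

	# legacy binary creates the cluster, then stops it
	/tmp/minikube-v1.26.0.2193999339 start -p stopped-upgrade-991743 --memory=2200 --vm-driver=docker --container-runtime=crio
	/tmp/minikube-v1.26.0.2193999339 -p stopped-upgrade-991743 stop
	# the new binary must start the stopped profile in place, without recreating it
	out/minikube-linux-arm64 start -p stopped-upgrade-991743 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio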

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.27s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-991743
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-991743: (1.270332712s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.27s)

                                                
                                    
x
+
TestPause/serial/Start (64.09s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-492514 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-492514 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m4.088015058s)
--- PASS: TestPause/serial/Start (64.09s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (20.48s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-492514 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-492514 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.424300515s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (20.48s)

                                                
                                    
x
+
TestPause/serial/Pause (1.37s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-492514 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-492514 --alsologtostderr -v=5: (1.366962897s)
--- PASS: TestPause/serial/Pause (1.37s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.44s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-492514 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-492514 --output=json --layout=cluster: exit status 2 (439.99134ms)

                                                
                                                
-- stdout --
	{"Name":"pause-492514","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-492514","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.44s)
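
Note: 418 ("I'm a teapot") is repurposed in the cluster layout to mean Paused, alongside 200 OK, 405 Stopped, 500 Error and 507 InsufficientStorage, all of which appear in this run. Minimal check by hand (jq assumed):

	out/minikube-linux-arm64 status -p pause-492514 --output=json --layout=cluster | jq '.StatusCode'
	# prints 418 while paused; the status command itself exits 2 in that state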

                                                
                                    
x
+
TestPause/serial/Unpause (1.08s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-492514 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-492514 --alsologtostderr -v=5: (1.076948968s)
--- PASS: TestPause/serial/Unpause (1.08s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.83s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-492514 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-492514 --alsologtostderr -v=5: (1.832845914s)
--- PASS: TestPause/serial/PauseAgain (1.83s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (3.25s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-492514 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-492514 --alsologtostderr -v=5: (3.24545396s)
--- PASS: TestPause/serial/DeletePaused (3.25s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.31s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-492514
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-492514: exit status 1 (23.311111ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-492514: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.31s)
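
Note: the [] on stdout plus exit status 1 is docker's way of saying the volume no longer exists, which is the assertion here. Sketch of the same check:

	docker volume inspect pause-492514 >/dev/null 2>&1 \
	  || echo "profile volume gone, as expected after 'minikube delete'"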

                                                
                                    
x
+
TestNetworkPlugins/group/false (5.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-441479 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-441479 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (626.113373ms)

                                                
                                                
-- stdout --
	* [false-441479] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19888
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19888-292449/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-292449/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 00:04:09.865498  474032 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:04:09.865652  474032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:04:09.865660  474032 out.go:358] Setting ErrFile to fd 2...
	I1210 00:04:09.865666  474032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:04:09.865903  474032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-292449/.minikube/bin
	I1210 00:04:09.866341  474032 out.go:352] Setting JSON to false
	I1210 00:04:09.867292  474032 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9991,"bootTime":1733779059,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 00:04:09.867372  474032 start.go:139] virtualization:  
	I1210 00:04:09.910637  474032 out.go:177] * [false-441479] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1210 00:04:09.941429  474032 out.go:177]   - MINIKUBE_LOCATION=19888
	I1210 00:04:09.941433  474032 notify.go:220] Checking for updates...
	I1210 00:04:09.973777  474032 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:04:10.006874  474032 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-292449/kubeconfig
	I1210 00:04:10.031191  474032 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-292449/.minikube
	I1210 00:04:10.063090  474032 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 00:04:10.110221  474032 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:04:10.141781  474032 config.go:182] Loaded profile config "force-systemd-env-085786": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:04:10.141901  474032 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:04:10.165979  474032 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1210 00:04:10.166108  474032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 00:04:10.234235  474032 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:47 SystemTime:2024-12-10 00:04:10.223498026 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1210 00:04:10.234360  474032 docker.go:318] overlay module found
	I1210 00:04:10.265977  474032 out.go:177] * Using the docker driver based on user configuration
	I1210 00:04:10.296797  474032 start.go:297] selected driver: docker
	I1210 00:04:10.296825  474032 start.go:901] validating driver "docker" against <nil>
	I1210 00:04:10.296841  474032 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:04:10.329330  474032 out.go:201] 
	W1210 00:04:10.363480  474032 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1210 00:04:10.396021  474032 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-441479 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-441479

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-441479

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-441479

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-441479

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-441479

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-441479

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-441479

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-441479

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-441479

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-441479

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-441479

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-441479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-441479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-441479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-441479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-441479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-441479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-441479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-441479" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-441479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-441479" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-441479" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-441479

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-441479"

                                                
                                                
----------------------- debugLogs end: false-441479 [took: 5.067801887s] --------------------------------
helpers_test.go:175: Cleaning up "false-441479" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-441479
--- PASS: TestNetworkPlugins/group/false (5.93s)
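
Note: this failure is the point of the test: with --container-runtime=crio, minikube rejects --cni=false because CRI-O ships no built-in pod networking and needs a CNI plugin. Any concrete CNI value makes the same start line valid; bridge below is an illustrative choice, not something this suite runs:

	out/minikube-linux-arm64 start -p false-441479 --memory=2048 --cni=bridge --driver=docker --container-runtime=crio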

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (152.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-052715 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E1210 00:07:21.805914  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-052715 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m32.581241833s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (152.58s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-052715 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [56e38196-cde3-4b2d-8aa1-c87935527162] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [56e38196-cde3-4b2d-8aa1-c87935527162] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004339135s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-052715 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.58s)
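
Note: the harness polls the pod itself until it is Running and Ready; an equivalent hand-run wait uses kubectl's built-in condition support (a substitute sketch, not what the test executes):

	kubectl --context old-k8s-version-052715 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
	kubectl --context old-k8s-version-052715 exec busybox -- /bin/sh -c "ulimit -n"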

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-052715 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-052715 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-052715 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-052715 --alsologtostderr -v=3: (12.014414048s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-052715 -n old-k8s-version-052715
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-052715 -n old-k8s-version-052715: exit status 7 (98.611262ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-052715 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (149.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-052715 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-052715 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m29.464713497s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-052715 -n old-k8s-version-052715
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (149.83s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (71.77s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-317795 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1210 00:09:18.722415  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:09:22.097907  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-317795 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (1m11.770920295s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (71.77s)
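Note: --preload=false turns off minikube's preloaded image tarball, so this start pulls each Kubernetes image individually; that is why this FirstStart runs noticeably longer than the preloaded starts elsewhere in this run (e.g. embed-certs at 53.36s). Invocation from the log:

    out/minikube-linux-arm64 start -p no-preload-317795 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.31.2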

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-317795 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [caf14d43-0bde-44dd-9223-2329bf6486b7] Pending
helpers_test.go:344: "busybox" [caf14d43-0bde-44dd-9223-2329bf6486b7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [caf14d43-0bde-44dd-9223-2329bf6486b7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004708138s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-317795 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.40s)
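Note: DeployApp creates a busybox pod from testdata/busybox.yaml (a path relative to the test binary's working directory), waits for it to reach Running, then reads the container's open-file-descriptor limit. Equivalent manual steps, assuming the same kubectl context:

    kubectl --context no-preload-317795 create -f testdata/busybox.yaml
    kubectl --context no-preload-317795 wait --for=condition=ready pod/busybox --timeout=8m
    kubectl --context no-preload-317795 exec busybox -- /bin/sh -c "ulimit -n"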

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-317795 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-317795 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.032186751s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-317795 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)
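Note: the --images and --registries overrides point the metrics-server addon at registry.k8s.io/echoserver:1.4 under the placeholder registry fake.domain; the step only asserts that the Deployment object gets created, so the image never has to be pullable. Invocation from the log:

    out/minikube-linux-arm64 addons enable metrics-server -p no-preload-317795 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context no-preload-317795 describe deploy/metrics-server -n kube-system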

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-317795 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-317795 --alsologtostderr -v=3: (12.033193752s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-317795 -n no-preload-317795
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-317795 -n no-preload-317795: exit status 7 (76.431524ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-317795 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (300.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-317795 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-317795 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m59.846240441s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-317795 -n no-preload-317795
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (300.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-drj4n" [19f91186-015d-41d4-8728-a791e80b3c19] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003558884s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-drj4n" [19f91186-015d-41d4-8728-a791e80b3c19] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004292543s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-052715 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-052715 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
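Note: VerifyKubernetesImages lists the images present on the node and flags anything outside the expected core set; the kindnetd (CNI) and busybox (from DeployApp) entries above are anticipated extras, so the step still passes. To inspect the same data by hand:

    out/minikube-linux-arm64 -p old-k8s-version-052715 image list --format=json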

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-052715 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-052715 -n old-k8s-version-052715
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-052715 -n old-k8s-version-052715: exit status 2 (330.762034ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-052715 -n old-k8s-version-052715
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-052715 -n old-k8s-version-052715: exit status 2 (339.578796ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-052715 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-052715 -n old-k8s-version-052715
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-052715 -n old-k8s-version-052715
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.12s)
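Note: after pause, the {{.APIServer}} template prints "Paused" and {{.Kubelet}} prints "Stopped", each with exit status 2, which the harness tolerates ("may be ok"); unpause then brings both status checks back to a clean exit. The sequence the test drives:

    out/minikube-linux-arm64 pause -p old-k8s-version-052715 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-052715 -n old-k8s-version-052715
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-052715 -n old-k8s-version-052715
    out/minikube-linux-arm64 unpause -p old-k8s-version-052715 --alsologtostderr -v=1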

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (53.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-466436 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-466436 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (53.361363489s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (53.36s)
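Note: --embed-certs inlines the client certificate and key into the kubeconfig entry (client-certificate-data / client-key-data) instead of referencing files under the profile directory. A rough sanity check, assuming the default kubeconfig path:

    grep -c client-certificate-data $HOME/.kube/config    # nonzero once an embed-certs profile has been written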

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-466436 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7434ae46-30fd-4755-ad3f-279363ce0f36] Pending
helpers_test.go:344: "busybox" [7434ae46-30fd-4755-ad3f-279363ce0f36] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7434ae46-30fd-4755-ad3f-279363ce0f36] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004230682s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-466436 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-466436 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-466436 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.014303205s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-466436 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-466436 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-466436 --alsologtostderr -v=3: (11.956591325s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.96s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-466436 -n embed-certs-466436
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-466436 -n embed-certs-466436: exit status 7 (78.209684ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-466436 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (292.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-466436 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1210 00:12:56.424177  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/old-k8s-version-052715/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:12:56.430685  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/old-k8s-version-052715/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:12:56.442234  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/old-k8s-version-052715/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:12:56.463797  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/old-k8s-version-052715/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:12:56.505308  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/old-k8s-version-052715/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:12:56.586694  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/old-k8s-version-052715/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:12:56.748383  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/old-k8s-version-052715/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:12:57.070256  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/old-k8s-version-052715/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:12:57.712066  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/old-k8s-version-052715/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:12:58.993476  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/old-k8s-version-052715/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:13:01.554802  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/old-k8s-version-052715/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:13:06.676977  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/old-k8s-version-052715/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:13:16.918978  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/old-k8s-version-052715/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:13:37.400312  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/old-k8s-version-052715/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:14:18.362034  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/old-k8s-version-052715/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:14:18.721889  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:14:22.098273  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-466436 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m51.813844947s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-466436 -n embed-certs-466436
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (292.19s)
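Note: the repeated E1210 cert_rotation errors above come from the long-running test process, not from this profile's start; client-go's certificate-rotation watcher apparently still tracks client.crt files for profiles (old-k8s-version-052715, functional-648515, addons-006125) whose certificate files are gone by this point in the run. They are noise relative to the embed-certs result, as the referenced path simply no longer exists:

    stat /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/old-k8s-version-052715/client.crt    # no such file or directory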

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-t4n5x" [7bcf3de4-3225-4ab8-9a67-df2b24a8784d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004089837s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-t4n5x" [7bcf3de4-3225-4ab8-9a67-df2b24a8784d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004834684s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-317795 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-317795 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-317795 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-317795 -n no-preload-317795
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-317795 -n no-preload-317795: exit status 2 (338.934794ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-317795 -n no-preload-317795
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-317795 -n no-preload-317795: exit status 2 (332.257607ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-317795 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-317795 -n no-preload-317795
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-317795 -n no-preload-317795
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.8s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-948709 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1210 00:15:40.283391  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/old-k8s-version-052715/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-948709 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (50.802640551s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.80s)
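Note: --apiserver-port=8444 moves the API server off minikube's default 8443, which is the behavior this default-k8s-diff-port group exists to exercise. One way to confirm which port the context targets:

    kubectl --context default-k8s-diff-port-948709 cluster-info    # control-plane URL should end in :8444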

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-948709 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ad461d4f-8459-447b-9415-61a4fd1f3591] Pending
helpers_test.go:344: "busybox" [ad461d4f-8459-447b-9415-61a4fd1f3591] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ad461d4f-8459-447b-9415-61a4fd1f3591] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 12.003798686s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-948709 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.36s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-948709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-948709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.030022076s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-948709 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-948709 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-948709 --alsologtostderr -v=3: (12.019451309s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-948709 -n default-k8s-diff-port-948709
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-948709 -n default-k8s-diff-port-948709: exit status 7 (85.423369ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-948709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (297.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-948709 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-948709 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m56.604857713s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-948709 -n default-k8s-diff-port-948709
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (297.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vg9nm" [c1c1d07b-5e1e-4a14-8f1f-2b3815bd088f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004332211s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vg9nm" [c1c1d07b-5e1e-4a14-8f1f-2b3815bd088f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004910202s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-466436 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-466436 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-466436 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-466436 -n embed-certs-466436
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-466436 -n embed-certs-466436: exit status 2 (388.586811ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-466436 -n embed-certs-466436
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-466436 -n embed-certs-466436: exit status 2 (342.33029ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-466436 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-466436 -n embed-certs-466436
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-466436 -n embed-certs-466436
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (35.62s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-308706 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1210 00:17:56.423633  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/old-k8s-version-052715/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-308706 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (35.621106637s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.62s)
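Note: this start deliberately leaves networking to a user-supplied plugin: --network-plugin=cni plus --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 configure kubeadm for an external CNI that the test never installs, and --wait=apiserver,system_pods,default_sa narrows the readiness wait so start can succeed even though ordinary pods cannot come up (hence the "cni mode requires additional setup" warnings in the later steps). Invocation from the log:

    out/minikube-linux-arm64 start -p newest-cni-308706 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --container-runtime=crio --kubernetes-version=v1.31.2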

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-308706 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-308706 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.475005835s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.48s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-308706 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-308706 --alsologtostderr -v=3: (1.303051482s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-308706 -n newest-cni-308706
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-308706 -n newest-cni-308706: exit status 7 (79.365064ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-308706 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.45s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-308706 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1210 00:18:24.125595  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/old-k8s-version-052715/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-308706 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (15.096295656s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-308706 -n newest-cni-308706
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.45s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-308706 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-308706 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-308706 -n newest-cni-308706
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-308706 -n newest-cni-308706: exit status 2 (341.994243ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-308706 -n newest-cni-308706
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-308706 -n newest-cni-308706: exit status 2 (331.547872ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-308706 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-308706 -n newest-cni-308706
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-308706 -n newest-cni-308706
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (54.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-441479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1210 00:19:05.169546  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:19:18.721665  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:19:22.097996  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-441479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (54.107524161s)
--- PASS: TestNetworkPlugins/group/auto/Start (54.11s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-441479 "pgrep -a kubelet"
I1210 00:19:30.308446  297827 config.go:182] Loaded profile config "auto-441479": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-441479 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-g4xp8" [98d5b5da-3440-41fc-af83-8a123a79069a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-g4xp8" [98d5b5da-3440-41fc-af83-8a123a79069a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004898102s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-441479 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-441479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-441479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
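Note: the three checks above exercise connectivity from inside the netcat deployment: DNS resolves the in-cluster name kubernetes.default, Localhost confirms the pod can reach its own port directly, and HairPin has the pod connect back to itself through its own Service name (the hairpin path). Manually:

    kubectl --context auto-441479 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-441479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-441479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"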

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (48.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-441479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1210 00:20:05.598703  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/no-preload-317795/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:20:26.080753  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/no-preload-317795/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-441479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (48.375714796s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (48.38s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-xs2nq" [4578a447-3398-40f6-9f37-fbb2a36a3738] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003896193s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
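Note: with --cni=kindnet, minikube deploys the kindnet DaemonSet into kube-system, and ControllerPod waits for a pod labeled app=kindnet to become healthy before the connectivity tests run. The same pods can be listed with:

    kubectl --context kindnet-441479 get pods -n kube-system -l app=kindnet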

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-441479 "pgrep -a kubelet"
I1210 00:20:56.711159  297827 config.go:182] Loaded profile config "kindnet-441479": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-441479 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-st72n" [307cb98f-251e-43e6-909d-13dc5bf30a31] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-st72n" [307cb98f-251e-43e6-909d-13dc5bf30a31] Running
E1210 00:21:07.042066  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/no-preload-317795/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003582571s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-441479 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-441479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-441479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (69.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-441479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-441479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m9.768774407s)
--- PASS: TestNetworkPlugins/group/calico/Start (69.77s)
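
Note: the Start invocation pins the profile to 3072 MB, the docker driver, and the crio runtime, and --wait=true with --wait-timeout=15m makes minikube block until core components are healthy. With --cni=calico, the quickest post-start sanity check is the same label selector the ControllerPod step later in the log uses:

    kubectl --context calico-441479 get pods -n kube-system -l k8s-app=calico-node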

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bbdmv" [1de35b3b-a3ee-4590-a513-c9d8dcedbd1e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004019777s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bbdmv" [1de35b3b-a3ee-4590-a513-c9d8dcedbd1e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006024485s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-948709 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-948709 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)
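
Note: VerifyKubernetesImages lists every image in the node's store and reports the ones outside minikube's own image set; the kindnetd and busybox test images are expected leftovers here. A rough manual equivalent, assuming jq is available:

    out/minikube-linux-arm64 -p default-k8s-diff-port-948709 image list --format=json | jq -r '.[].repoTags[]'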

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.73s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-948709 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-948709 --alsologtostderr -v=1: (1.181394883s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-948709 -n default-k8s-diff-port-948709
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-948709 -n default-k8s-diff-port-948709: exit status 2 (542.651324ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-948709 -n default-k8s-diff-port-948709
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-948709 -n default-k8s-diff-port-948709: exit status 2 (410.124733ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-948709 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-948709 --alsologtostderr -v=1: (1.156300986s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-948709 -n default-k8s-diff-port-948709
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-948709 -n default-k8s-diff-port-948709
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.73s)
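
Note: the Pause sequence is pause the profile, confirm via status templates that the apiserver reports Paused and the kubelet Stopped, then unpause and confirm both recover. minikube status deliberately exits non-zero (status 2) while components are down, which is why the harness marks those exits "(may be ok)". The same check by hand:

    out/minikube-linux-arm64 pause -p default-k8s-diff-port-948709
    out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-948709   # should print Paused, exit 2
    out/minikube-linux-arm64 unpause -p default-k8s-diff-port-948709
    out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-948709   # should print Running, exit 0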
E1210 00:25:50.424366  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/kindnet-441479/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:50.430881  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/kindnet-441479/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:50.442272  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/kindnet-441479/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:50.463880  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/kindnet-441479/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:50.505420  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/kindnet-441479/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:50.587212  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/kindnet-441479/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:50.748583  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/kindnet-441479/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:51.070304  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/kindnet-441479/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:51.712340  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/kindnet-441479/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:52.501710  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/auto-441479/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:52.994313  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/kindnet-441479/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:55.556601  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/kindnet-441479/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:00.678178  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/kindnet-441479/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:10.919688  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/kindnet-441479/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:17.681275  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/default-k8s-diff-port-948709/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:17.687788  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/default-k8s-diff-port-948709/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:17.699293  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/default-k8s-diff-port-948709/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:17.720739  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/default-k8s-diff-port-948709/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:17.762241  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/default-k8s-diff-port-948709/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:17.843693  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/default-k8s-diff-port-948709/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:18.005483  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/default-k8s-diff-port-948709/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:18.330453  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/default-k8s-diff-port-948709/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:18.972661  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/default-k8s-diff-port-948709/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:20.254477  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/default-k8s-diff-port-948709/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:22.816876  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/default-k8s-diff-port-948709/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:27.938595  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/default-k8s-diff-port-948709/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:31.401297  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/kindnet-441479/client.crt: no such file or directory" logger="UnhandledError"
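
Note: the cert_rotation errors above (and the similar runs elsewhere in this log) come from client-go's certificate-rotation watcher in the shared test process, which keeps polling client.crt for profiles that earlier tests have already torn down; once kindnet-441479 and default-k8s-diff-port-948709 were deleted the files vanished but the watcher kept firing. They are background noise, not failures in the surrounding tests.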

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (60.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-441479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1210 00:22:28.963836  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/no-preload-317795/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-441479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m0.53888335s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.54s)
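
Note: --cni here takes a path rather than a keyword: testdata/kube-flannel.yaml is applied as the CNI manifest in place of one of minikube's built-ins, which is how the harness exercises arbitrary CNI configs. Any reachable manifest works the same way, e.g. (hypothetical path):

    out/minikube-linux-arm64 start -p custom-flannel-441479 --cni=/path/to/my-cni.yaml --driver=docker --container-runtime=crio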

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-6jtv9" [4ae11aa6-f85b-4826-90d4-a33f15e0c99b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.010299905s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-441479 "pgrep -a kubelet"
I1210 00:22:46.794929  297827 config.go:182] Loaded profile config "calico-441479": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-441479 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ks2lw" [79965931-9077-4d8e-832a-3e0a90337c93] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ks2lw" [79965931-9077-4d8e-832a-3e0a90337c93] Running
E1210 00:22:56.424147  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/old-k8s-version-052715/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004264222s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-441479 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-441479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-441479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-441479 "pgrep -a kubelet"
I1210 00:23:02.065432  297827 config.go:182] Loaded profile config "custom-flannel-441479": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-441479 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sg557" [f382857b-598f-4241-b96c-ae46058f56ab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-sg557" [f382857b-598f-4241-b96c-ae46058f56ab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.006446738s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-441479 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-441479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-441479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (82.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-441479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-441479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m22.235110088s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (82.24s)
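
Note: --enable-default-cni=true is effectively the older spelling of --cni=bridge, dropping in minikube's built-in bridge config rather than installing a third-party plugin; the bridge group later in the log exercises the newer flag for the same setup. This run is also the slowest Start in the group (82s versus 59-70s for the others), though a single sample is suggestive at best.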

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (59.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-441479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1210 00:24:01.808336  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:18.722371  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:22.097500  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/addons-006125/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:30.562694  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/auto-441479/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:30.569135  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/auto-441479/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:30.581455  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/auto-441479/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:30.602942  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/auto-441479/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:30.644349  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/auto-441479/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:30.725970  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/auto-441479/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:30.887610  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/auto-441479/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:31.209206  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/auto-441479/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:31.850746  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/auto-441479/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:33.132168  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/auto-441479/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:35.693797  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/auto-441479/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-441479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (59.012736915s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-xbmw4" [793d2463-ac4b-4faf-83f4-61151f0d4298] Running
E1210 00:24:40.815754  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/auto-441479/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:24:45.104434  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/no-preload-317795/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.009128373s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
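
Note: ControllerPod waits for the CNI's own daemon pod, matched by label in its home namespace (app=flannel in kube-flannel here, k8s-app=calico-node in kube-system earlier). A close kubectl equivalent of the harness's wait:

    kubectl --context flannel-441479 wait --for=condition=Ready pod -l app=flannel -n kube-flannel --timeout=10m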

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-441479 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-441479 "pgrep -a kubelet"
I1210 00:24:45.739497  297827 config.go:182] Loaded profile config "flannel-441479": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (13.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-441479 replace --force -f testdata/netcat-deployment.yaml
I1210 00:24:45.816188  297827 config.go:182] Loaded profile config "enable-default-cni-441479": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tkqmm" [a7cebb12-e2a1-4cb8-b1ef-6fe05c17af45] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tkqmm" [a7cebb12-e2a1-4cb8-b1ef-6fe05c17af45] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.003643268s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-441479 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-s7t5s" [10338353-69be-40f2-b9e5-89e970b09803] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1210 00:24:51.057990  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/auto-441479/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-s7t5s" [10338353-69be-40f2-b9e5-89e970b09803] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004078811s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-441479 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-441479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-441479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-441479 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-441479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-441479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (70.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-441479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-441479 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m10.419378812s)
--- PASS: TestNetworkPlugins/group/bridge/Start (70.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-441479 "pgrep -a kubelet"
I1210 00:26:34.517943  297827 config.go:182] Loaded profile config "bridge-441479": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-441479 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-x4llc" [b7b280dc-8f04-4b9c-b53b-73174bf7e6a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1210 00:26:38.179940  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/default-k8s-diff-port-948709/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-x4llc" [b7b280dc-8f04-4b9c-b53b-73174bf7e6a7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004393718s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-441479 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-441479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-441479 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (31/330)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.58s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-257686 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-257686" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-257686
--- SKIP: TestDownloadOnlyKic (0.58s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.34s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-006125 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.34s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-666858" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-666858
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (5.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
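
Note: kubenet is skipped outright because crio needs a CNI plugin, so no kubenet-441479 cluster or kubeconfig context is ever created. Every probe in the debugLogs dump that follows therefore fails the same way: kubectl commands report that the context does not exist, and minikube commands report that the profile is not found. None of this indicates a test problem.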
panic.go:629: 
----------------------- debugLogs start: kubenet-441479 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-441479

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-441479

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-441479

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-441479

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-441479

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-441479

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-441479

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-441479

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-441479

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-441479

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-441479

>>> host: crictl pods:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> host: crictl containers:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> k8s: describe netcat deployment:
error: context "kubenet-441479" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-441479" does not exist

>>> k8s: netcat logs:
error: context "kubenet-441479" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-441479" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-441479" does not exist

>>> k8s: coredns logs:
error: context "kubenet-441479" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-441479" does not exist

>>> k8s: api server logs:
error: context "kubenet-441479" does not exist

>>> host: /etc/cni:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> host: ip a s:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> host: ip r s:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> host: iptables-save:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> host: iptables table nat:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-441479" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-441479" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-441479" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> host: kubelet daemon config:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> k8s: kubelet logs:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-441479

>>> host: docker daemon status:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> host: docker daemon config:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> host: docker system info:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> host: cri-docker daemon status:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> host: cri-docker daemon config:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> host: cri-dockerd version:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> host: containerd daemon status:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> host: containerd daemon config:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> host: containerd config dump:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> host: crio daemon status:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> host: crio daemon config:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> host: /etc/crio:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

>>> host: crio config:
* Profile "kubenet-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-441479"

----------------------- debugLogs end: kubenet-441479 [took: 4.920286015s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-441479" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-441479
--- SKIP: TestNetworkPlugins/group/kubenet (5.17s)

x
+
TestNetworkPlugins/group/cilium (5.3s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E1210 00:04:18.723171  297827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-292449/.minikube/profiles/functional-648515/client.crt: no such file or directory" logger="UnhandledError"
panic.go:629: 
----------------------- debugLogs start: cilium-441479 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-441479

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-441479

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-441479

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-441479

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-441479

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-441479

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-441479

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-441479

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-441479

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-441479

>>> host: /etc/nsswitch.conf:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: /etc/hosts:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: /etc/resolv.conf:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-441479

>>> host: crictl pods:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: crictl containers:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> k8s: describe netcat deployment:
error: context "cilium-441479" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-441479" does not exist

>>> k8s: netcat logs:
error: context "cilium-441479" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-441479" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-441479" does not exist

>>> k8s: coredns logs:
error: context "cilium-441479" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-441479" does not exist

>>> k8s: api server logs:
error: context "cilium-441479" does not exist

>>> host: /etc/cni:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: ip a s:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: ip r s:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: iptables-save:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: iptables table nat:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-441479

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-441479

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-441479" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-441479" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-441479

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-441479

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-441479" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-441479" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-441479" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-441479" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-441479" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: kubelet daemon config:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> k8s: kubelet logs:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-441479

>>> host: docker daemon status:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: docker daemon config:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: docker system info:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: cri-docker daemon status:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: cri-docker daemon config:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: cri-dockerd version:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: containerd daemon status:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: containerd daemon config:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: containerd config dump:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: crio daemon status:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: crio daemon config:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: /etc/crio:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

>>> host: crio config:
* Profile "cilium-441479" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-441479"

----------------------- debugLogs end: cilium-441479 [took: 5.079097557s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-441479" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-441479
--- SKIP: TestNetworkPlugins/group/cilium (5.30s)
