DEBUGINFOD_TIMEOUT is a good way to catch servers that are too slow to *start* transmitting a file, but we have no way of limiting total download time or space. A user might prefer to have their debugger fetch only quick & small files and make do without the bigger ones. Some transitive dependencies of e.g. GNOME programs are huge: 3GB of LLVM debuginfo, 1GB of webkitgtk, etc.

We could add $DEBUGINFOD_MAXSIZE and/or $DEBUGINFOD_MAXTIME to the client-side environment-variable suite. The MAXSIZE limit could be communicated to the server in the query as an extra header, so the server can quickly respond with an HTTP error code (it can generally determine destination file sizes without actually decompressing them); it can also be enforced on the client during download, as soon as a Content-Length: header is received. The MAXTIME limit could be added as a debuginfod-client.c main-loop parameter.
possible representation in the APIs:
-> $DEBUGINFOD_MAXSIZE (in bytes)
-> outgoing request header X-DEBUGINFOD-MAXSIZE: (number)
<- HTTP response code 406 (Not Acceptable) if rejected
<- POSIX API rc -EFBIG (File too large) (don't cache as 000 negative-hit)
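From the user's side, the proposal boils down to exporting the two variables before starting a debugger. A hypothetical usage sketch, assuming the variable names land as proposed:

```shell
# Hypothetical usage under the proposed variable names:
export DEBUGINFOD_MAXSIZE=$((100 * 1024 * 1024))  # refuse files over 100 MB
export DEBUGINFOD_MAXTIME=90                      # give up on any fetch after 90 s
echo "maxsize=$DEBUGINFOD_MAXSIZE maxtime=$DEBUGINFOD_MAXTIME"
# Any debuginfod client (gdb, debuginfod-find, ...) started from this
# shell would then inherit both limits.
```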
commit 72a6f9d6f4280a50631b475e620f9c7858d9f4b5
Author: Noah Sanci <nsanci@redhat.com>
Date:   Mon Jul 26 13:29:11 2021 -0400

    debuginfod: PR27982 - added DEBUGINFOD_MAXSIZE and DEBUGINFOD_MAXTIME