Reasonable concerns have been raised about whether a debuginfod client has any way of verifying that downloaded artifacts are unmodified / still trustworthy. This is a good question, because any package-level signature protection is stripped at the server when we serve constituent files in isolation.
As transport over HTTPS protects the content, we can safely assume that, during and immediately after the download, the content is fine. But what of cached files? What if some program changes the cache contents sometime between download and a much later reuse? (Note that this threat model is not that serious: any tool that could modify cache contents could probably also modify dot files etc. and take over the user's account.)
But anyway, as a trust/comfort measure, we could provide limited verification of cached content without having to fully download it again. Here's one possible way:
- all this being conditional on a client-side environment variable like $DEBUGINFOD_VERIFY being set
- in the debuginfod-client.c code, during a find operation, if there is a cache hit, the client will STILL make a connection to the upstream $DEBUGINFOD_URLS, but only as a HEAD request with the otherwise-identical webapi query
- the server code, upon seeing the HEAD query, will return additional response headers
- one of these response headers will be X-Debuginfod-Hash: XYZXYZXYZ, a reasonably secure hash of the content, probably sha256 or such
- the server will compute / cache this hash in a new sqlite table, akin to the buildids9_file_mtime_scanned, for each file over time, subject to grooming as usual
- how federated servers would do this when serving from their own cache: TBD
- the client will look for this response header from all servers that return 200
- if no server returns this header (maybe because the server is just old, or doesn't happen to have the hash cached), and if the $DEBUGINFOD_VERIFY value is "permissive": result -> PASS, return
- the client will pick ANY or ALL of the returned hashes (maybe depending on bug #25607 policy?)
- the client will compute the same hash function on the cached content, and compare
- if the local hash mismatches the server-provided hash, warn via $DEBUGINFOD_VERBOSE, delete local cached object, perform full download
- otherwise: result -> PASS, return
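The client-side steps above can be sketched roughly as follows. This is a hedged illustration only: the function name, return values, and the assumption that X-Debuginfod-Hash carries a hex sha256 are all hypothetical; a real implementation would be C code in debuginfod-client.c.

```python
import hashlib
import os

# Hypothetical policy values mirroring the proposed $DEBUGINFOD_VERIFY variable.
PERMISSIVE = "permissive"
ENFORCING = "enforcing"

def verify_cached_file(cache_path, server_hash, policy):
    """Compare the sha256 of a cached file against a server-supplied
    X-Debuginfod-Hash value (assumed to be a hex sha256 digest).
    Returns "pass" if the cache entry may be reused, or "refetch" if the
    caller should delete-and-redownload semantics apply."""
    if server_hash is None:
        # No server returned the header: PASS only under permissive policy.
        return "pass" if policy == PERMISSIVE else "refetch"
    with open(cache_path, "rb") as f:
        local_hash = hashlib.sha256(f.read()).hexdigest()
    if local_hash != server_hash:
        # Mismatch: drop the cached object; the caller performs a full download.
        os.unlink(cache_path)
        return "refetch"
    return "pass"
```

Note that under "enforcing", a missing header forces a refetch, while "permissive" accepts the cached file unverified, matching the fallback step above.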
In the elfutils profile.d/debuginfod.* files, distro policy could set $DEBUGINFOD_VERIFY=enforcing or =permissive or (none) differently for root and/or less privileged users.
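As a sketch, such a distro policy fragment might look like the following. The $DEBUGINFOD_VERIFY variable and its values are hypothetical, taken from the proposal above; only the profile.d mechanism itself is real.

```shell
# Hypothetical fragment for an elfutils profile.d/debuginfod.sh:
# enforce cache verification for root, be permissive for other users.
if [ "$(id -u)" -eq 0 ]; then
    DEBUGINFOD_VERIFY=enforcing
else
    DEBUGINFOD_VERIFY=permissive
fi
export DEBUGINFOD_VERIFY
```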
I don't think this protection is particularly interesting. Normal file system permissions should be used to safeguard any files in the home directory. I don't see a reason to try to add a verification layer on top. And even if it were added, it could not be effective anyway.
Yeah. It may comfort those who are worried about the integrity of their previously downloaded cached files, but it is not robust against a local attacker who already has control over the filesystem or processes.
Instead of `X-Debuginfod-Hash` you can use `ETag`, where you can put anything, including a sha256 (this can be prescribed in the webapi description); then a GET request with `If-None-Match` + the tag value (which is a hash) will return just 304 if the hash has not changed. So a separate HEAD request is not needed either.
And it should be possible to use the Content-Length header to verify that the data is not excessively large (something that is not possible with the hash alone).
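The conditional-GET scheme described here can be sketched as below. The function and its return shape are illustrative, not the actual debuginfod server code; the ETag is assumed to be a quoted hex sha256 of the content, as the comment suggests could be prescribed in the webapi description.

```python
import hashlib

def handle_get(content, if_none_match=None):
    """Serve content with a sha256-based ETag.  If the client's
    If-None-Match value matches, answer 304 with no body, so an
    unchanged cached file need not be re-downloaded."""
    etag = '"' + hashlib.sha256(content).hexdigest() + '"'
    if if_none_match == etag:
        return (304, {"ETag": etag}, b"")  # not modified, empty body
    headers = {"ETag": etag, "Content-Length": str(len(content))}
    return (200, headers, content)
```

The Content-Length on the 200 path is what would let a client reject an implausibly large payload up front, per the size-check point above.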
(In reply to Vitaly Chikunov from comment #3)
> Instead of `X-Debuginfod-Hash` you can use `ETag`, where you can put
> anything, including a sha256 (this can be prescribed in the webapi
> description); then a GET request with `If-None-Match` + the tag value
> (which is a hash) will return just 304 if the hash has not changed.
> So a separate HEAD request is not needed either.
That's a good idea, except in the case of an older [current] debuginfod that doesn't understand If-None-Match and would just resend the entire content every time. But at least that's not a security problem, just a performance one.
putting idea on ice