rfc/patch: debuginfod client $DEBUGINFOD_PROGRESS env var
Frank Ch. Eigler
Mon Dec 23 01:39:00 GMT 2019
> There is of course some i/o delay involved. But the majority of the
> time is cpu bound because the file needs to be decompressed.
> Not that it would help us now, but I wonder if it would be worth it to
> look at parallel compression/decompression to speed things up.
> Picking 90 seconds because that seems twice the worst case time to
> decompress, and that gives it about 45 seconds to provide ~10K/sec. But
> if you are seeing 60 seconds as worst case we could pick something like
> 120 seconds or something.
That's a possibility.
> But it should probably be a separate timeout from the connection
> timeout, and maybe from the total timeout (or maybe replace
> it?). What do you think?
Yeah, a connection timeout per se is probably not really worth having.
A URL with an unresolvable host will fail immediately. A reachable
http server that is fairly busy will connect, just take time. The
only common cases a connection timeout would catch are a running http
server that is so overloaded that it can't even service its accept(2)
backlog, or a nonexistent one that has been tarpitted/firewalled. A
minimal progress timeout can subsume these cases too.
OTOH, it's worth noting that these requests only take this kind of
time if they are being seriously serviced, i.e., "they are worth it".
Error cases fail relatively quickly. It's the success cases - and
these huge vmlinux files - that take time. And once the data starts
flowing - at all - the rest will follow as rapidly as the network
allows. That suggests one timeout could be sufficient - the progress
timeout, the one you found - just not too short and not too long.