This is the mail archive of the elfutils-devel@sourceware.org mailing list for the elfutils project.



Re: rfc/patch: debuginfod client $DEBUGINFOD_PROGRESS env var


Hi Frank,

On Wed, 2019-12-18 at 19:47 -0500, Frank Ch. Eigler wrote:
> [...]
> > I would add something like:
> > 
> >   /* Make sure there is at least some progress,
> >      try to get at least 1K per progress timeout seconds.  */
> >   curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, 5 * 1024L);
> >   curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, progress_timeout);
> > 
> > The idea being that if we didn't at least get 1K per 5 seconds then the
> > connection is just so bad that it doesn't make sense to wait for it to
> > finish, since that will most likely be forever (or feel like it for the
> > user).

Note that the comment and the pseudo code above don't match: that "5 *"
shouldn't be in the code, and the two option values are swapped
(CURLOPT_LOW_SPEED_LIMIT takes the bytes-per-second threshold,
CURLOPT_LOW_SPEED_TIME takes the window in seconds).

> The problem with that is that, for a large download such as a kernel,
> it can take almost a minute to just decompress the kernel-debuginfo
> rpm far enough to start streaming the vmlinux file.  (In the presence
> of caching somewhere in the http proxy tree, it gets much better the
> second+ time.)  So any small default would be counterproductive to
> e.g. systemtap users: they'd be forced to override this for basic
> usage.

I can see how 5 seconds might be too low in such a case, but a whole
minute surprises me. Indeed, when I tried it myself it wasn't quite a
whole minute, but it did take more than 40 seconds. Most of that time
is spent in rpm2cpio (I even tried a python implementation to compare).
There is of course some i/o delay involved, but the majority of the
time is cpu bound because the file needs to be decompressed.

Not that it would help us now, but I wonder if it would be worth it to
look at parallel compression/decompression to speed things up.

So 5 seconds of no progress seems too low. But I still don't like
infinite as the default. It seems unreasonable to let the user wait
indefinitely when the connection seems stuck. How about requiring at
least 450K in 90 seconds as the default? I am picking 90 seconds
because that is twice the worst-case time to decompress, which leaves
about 45 seconds to provide ~10K/sec. But if you are seeing 60 seconds
as the worst case we could pick something like 120 seconds instead.

But it should probably be a separate timeout from the connection
timeout, and maybe from the total timeout (or maybe replace it?). What
do you think?

Cheers,

Mark

