Bug 27859 - reused debuginfod_client objects don't clean out curl handles enough
Summary: reused debuginfod_client objects don't clean out curl handles enough
Status: RESOLVED FIXED
Alias: None
Product: elfutils
Classification: Unclassified
Component: debuginfod
Version: unspecified
Importance: P2 normal
Target Milestone: ---
Assignee: Frank Ch. Eigler
 
Reported: 2021-05-13 01:26 UTC by Frank Ch. Eigler
Modified: 2021-05-16 21:51 UTC
CC: 1 user



Description Frank Ch. Eigler 2021-05-13 01:26:42 UTC
Not long after deploying the new 0.184 release in anger, amerey noticed that it was possible for a federating debuginfod to report 404s on queries that its upstream can readily satisfy.  Further digging and testing indicate that this is not related to the 000 negative caching, but rather some sort of error-latching effect with the curl handles.

In a sequence of queries on the same debuginfod_client, as long as they are all successful, things are fine.  Once there is a 404 error however, this appears to latch, and subsequent requests give 404 whether or not they were resolvable by upstream.
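A minimal sketch of the reuse pattern described above, using the public libdebuginfod API (debuginfod_begin, debuginfod_find_debuginfo, debuginfod_end).  The hex build-ids are placeholders, not real hashes, and DEBUGINFOD_URLS is assumed to point at the affected federating server; this is an illustration of the symptom, not a test from the tree.  Compile with gcc reproducer.c -ldebuginfod.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <elfutils/debuginfod.h>

int
main (void)
{
  debuginfod_client *c = debuginfod_begin ();
  if (c == NULL)
    return 1;

  /* Placeholder build-ids: assume the first and third are resolvable
     by the upstream server and the second is not.  */
  const char *ids[] =
    {
      "1111111111111111111111111111111111111111",
      "2222222222222222222222222222222222222222",
      "3333333333333333333333333333333333333333",
    };

  for (int i = 0; i < 3; i++)
    {
      char *path = NULL;
      /* A build_id_len of 0 means the build-id is a hex string.  */
      int fd = debuginfod_find_debuginfo (c, (const unsigned char *) ids[i],
                                          0, &path);
      if (fd >= 0)
        {
          printf ("query %d: found %s\n", i, path);
          free (path);
          close (fd);
        }
      else
        /* With the bug, every query after the first 404 also reports an
           error, even though the third build-id is resolvable upstream.  */
        printf ("query %d: error %d\n", i, fd);
    }

  debuginfod_end (c);
  return 0;
}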
Comment 1 Mark Wielaard 2021-05-14 13:16:41 UTC
On Thu, May 13, 2021 at 01:26:42AM +0000, fche at redhat dot com via Elfutils-devel wrote:
> https://sourceware.org/bugzilla/show_bug.cgi?id=27859
>
> In a sequence of queries on the same debuginfod_client, as long as
> they are all successful, things are fine.  Once there is a 404 error
> however, this appears to latch, and subsequent requests give 404
> whether or not they were resolvable by upstream.

Makes sense that curl remembers 404 results. Does that mean we need to
refresh the curl handle when a request is made for a negative cached
entry and cache_miss_s expires?
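For reference, one possible shape of such a refresh is sketched below, using libcurl's curl_easy_reset, which clears per-request options and latched error state while keeping live connections and the DNS cache.  The refresh_handles helper is hypothetical and this is not the actual patch.

#include <stddef.h>
#include <curl/curl.h>

/* Hypothetical helper: reset each cached easy handle so that state
   latched by an earlier 404 does not leak into the next query.  */
static void
refresh_handles (CURL **handles, size_t n)
{
  for (size_t i = 0; i < n; i++)
    if (handles[i] != NULL)
      curl_easy_reset (handles[i]);
}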
Comment 2 Frank Ch. Eigler 2021-05-16 21:51:51 UTC
commit 0b454c7e1997 fixes this.