Bug 27859

Summary: reused debuginfod_client objects don't clean out curl handles enough
Product: elfutils
Component: debuginfod
Status: RESOLVED FIXED
Severity: normal
Priority: P2
Version: unspecified
Target Milestone: ---
Reporter: Frank Ch. Eigler <fche>
Assignee: Frank Ch. Eigler <fche>
CC: elfutils-devel
See Also: https://sourceware.org/bugzilla/show_bug.cgi?id=27701

Description Frank Ch. Eigler 2021-05-13 01:26:42 UTC
Not long after deploying the new 0.184 release in production, amerey noticed that it was possible for a federating debuginfod to report 404s on queries that its upstream can readily satisfy.  Further digging and testing indicates that this is not related to the 000 negative caching, but rather to some sort of error-latching effect in the curl handles.

In a sequence of queries on the same debuginfod_client, as long as they are all successful, things are fine.  Once a 404 error occurs, however, it appears to latch: subsequent requests return 404 whether or not they are resolvable by upstream.
Comment 1 Mark Wielaard 2021-05-14 13:16:41 UTC
On Thu, May 13, 2021 at 01:26:42AM +0000, fche at redhat dot com via Elfutils-devel wrote:
> https://sourceware.org/bugzilla/show_bug.cgi?id=27859
>
> In a sequence of queries on the same debuginfod_client, as long as
> they are all successful, things are fine.  Once there is a 404 error
> however, this appears to latch, and subsequent requests give 404
> whether or not they were resolvable by upstream.

Makes sense that curl remembers 404 results. Does that mean we need to
refresh the curl handle when a request is made for a negative cached
entry and cache_miss_s expires?
Comment 2 Frank Ch. Eigler 2021-05-16 21:51:51 UTC
commit 0b454c7e1997 fixes this.