Not long after deploying the new 0.184 release in anger, amerey noticed that it was possible for a federating debuginfod to report 404s on queries that its upstream can readily satisfy. Further digging and testing indicate that this is not related to the 000 negative caching, but rather to some sort of error-latching effect with the curl handles. In a sequence of queries on the same debuginfod_client, as long as they are all successful, things are fine. Once there is a 404 error, however, it appears to latch, and subsequent requests return 404 whether or not upstream could resolve them.
On Thu, May 13, 2021 at 01:26:42AM +0000, fche at redhat dot com via Elfutils-devel wrote:
> https://sourceware.org/bugzilla/show_bug.cgi?id=27859
>
> In a sequence of queries on the same debuginfod_client, as long as
> they are all successful, things are fine. Once there is a 404 error
> however, this appears to latch, and subsequent requests give 404
> whether or not they were resolvable by upstream.

Makes sense that curl remembers 404 results. Does that mean we need to refresh the curl handle when a request is made for a negative-cached entry and cache_miss_s expires?
Commit 0b454c7e1997 fixes this.