This is the mail archive of the mailing list for the GDB project.


Re: [RFA] Rewrite data cache and use for stack access.

On Mon, Jun 29, 2009 at 12:00:54PM -0700, Jacob Potter wrote:
> Here's a patch to clean up the data cache code and add an option to
> use it for all stack accesses. It's in two parts, first the rewrite of
> the cache itself, and then the changes to other code to use it.

I like the idea and your approach. (Doug tells me that I had something
to do with said idea, but I don't remember :-).  Thanks for working
on this!

> The first part of the patch makes the cache set-associative and
> write-through.  As originally implemented, the cache had writeback
> capability; data could be written to the cache and marked as not
> having been written to the target yet.  However, all the code paths
> through the cache actually wrote to the target immediately after each
> write command to the cache, so writeback brought no performance
> advantage while adding significant complexity.  Making the cache
> set-associative rather than fully-associative allows the size to be
> increased significantly without performance penalties.  Previously, a
> cache lookup cost O(n) in the size of the cache.

I think that part of the trouble with the existing cache is that it's
implemented too much like a cache.  Rather than making it
set-associative, and thus (marginally?) less effective, what about
fixing the search to use a more efficient structure?

What we have today is a linked list of blocks.  If we put them into a
splay tree instead, search performance would be much better.  It would
be very similar to addrmap.c's splay trees.

> Figuring out when to cache is tough.  There are plenty of situations
> where it's clearly not OK to cache data on GDB's end, most importantly
> memory-mapped I/O.  However, if a given access is known to be on the
> stack, it's not going to lie in I/O or otherwise special memory, and
> it's somewhat less likely even to be shared among threads.  Building a
> backtrace requires a lot of small accesses all referring to the stack,
> so caching them accelerates remote debugging significantly without
> affecting correctness.

The other especially useful place is code analysis.  If code accesses
were marked appropriately, we could redirect them straight to the
executable (mild risk, see my recent conversation with Pedro about
fix-and-continue style patching) or to the data cache (almost no
risk).  Anyway, this is clearly an issue for another patch.

For this patch:

* I'd find it helpful if any performance improvements were separated
out from stack caching.  Could you do that?

* Have you thought at all about non-stop or multi-process debugging?
If we have a data cache which is specifically for stack accesses,
maybe we should associate it with the thread.

* Do we really need an option to turn this off?  It seems to me to be
risk-free; whether the target memory ends up volatile or not, we
don't want volatile semantics on saved registers anyway.

* If we do add the option, it requires documentation.  Whether we do
or not, please add an entry to NEWS about the new feature.

* We'd prefer that new functions take a target_ops; are the
current_target versions of read_stack and target_stack necessary?

> +extern struct value *value_at_lazy_stack (struct type *type, CORE_ADDR addr);

IMO this one isn't needed; just call value_set_stack on the result
of value_at_lazy, right?

Daniel Jacobowitz
