This is the mail archive of the gdb@sources.redhat.com mailing list for the GDB project.


Re: Always cache memory and registers


On Sun, Jun 22, 2003 at 06:26:13PM -0400, Andrew Cagney wrote:
> Hello,
> 
> Think back to the rationale for GDB simply flushing its entire state
> after the user modifies memory or a register.  No matter how inefficient
> that update is, it can't be any worse than the full refresh needed after
> a single step.  All effort should be put into making single step fast,
> and not into making read-modify-write fast.
> 
> I think I've just found a similar argument that can be used to justify
> always enabling a data cache.  GDB's dcache is currently disabled (or at
> least was the last time I looked :-).  The rationale was that the user,
> when inspecting memory-mapped devices, would be confused if repeated reads
> did not reflect the device's current register values.
> 
> The problem with this is GUIs.
> 
> A GUI can simultaneously display multiple views of the same memory 
> region.  Should each of those displays generate separate target reads 
> (with different values and side effects) or should they all share a 
> common cache?
> 
> I think the latter, because it is impossible, from a GUI, to predict or
> control the number of reads a given request will trigger.  Hence I'm
> thinking that a data cache should be enabled by default.

Good reasoning.  I like it.
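
To make the GUI case concrete, here's a toy sketch (every name here is
invented for illustration; this is not GDB's actual dcache interface):
a "device" whose read has a side effect, and two views that both go
through one shared cache, so the device is only touched once per fill:

#include <stdio.h>
#include <string.h>

#define BLOCK 256

static int device_reads;	/* counts target accesses (side effects) */

/* Stand-in for a read from a memory-mapped device; on real hardware
   this might pop a FIFO or clear a status bit.  */
static int
device_read (unsigned long addr, unsigned char *buf, int len)
{
  (void) addr;			/* a real device would decode this */
  device_reads++;
  memset (buf, 0xab, len);
  return len;
}

static unsigned char cache_data[BLOCK];
static unsigned long cache_base;
static int cache_valid;

/* Every view goes through here; only a miss touches the device.
   For simplicity, assume a request never crosses a block boundary.  */
static int
shared_read (unsigned long addr, unsigned char *buf, int len)
{
  unsigned long base = addr & ~(unsigned long) (BLOCK - 1);

  if (!cache_valid || base != cache_base)
    {
      if (device_read (base, cache_data, BLOCK) < 0)
	return -1;
      cache_base = base;
      cache_valid = 1;
    }
  memcpy (buf, cache_data + (addr - base), len);
  return len;
}

int
main (void)
{
  unsigned char a[4], b[4];

  shared_read (0x1000, a, 4);	/* view 1: a memory window */
  shared_read (0x1000, b, 4);	/* view 2: a watch expression */
  printf ("view1=%02x view2=%02x, device touched %d time(s)\n",
	  a[0], b[0], device_reads);	/* one touch, not two */
  return 0;
}

With separate uncached reads, the second view could see a different
value and trigger a second side effect on the device.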

> The only proviso is that the current cache and target vector
> would need to be modified so that the cache only ever requests the data
> needed, leaving it to the target to supply more if available (much like
> registers do today).  The current dcache doesn't do this; instead it
> pads out small reads :-(

It needs tweaking for other reasons too.  It should probably have a
much higher threshold before it starts throwing out data, for one
thing.

Padding out small reads isn't such a bad idea.  It generally seems to
be the latency that's the real problem, esp. for remote targets.  I think
both NetBSD and GNU/Linux do fast bulk reads natively now?  I'd almost
want to increase the padding.
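
Back of the envelope, with made-up but plausible numbers for a slow
remote link: if one round trip costs about 10 ms, then

  64 bytes as 16 separate 4-byte requests = 16 round trips ~ 160 ms
  64 bytes as  1 padded request           =  1 round trip  ~  10 ms

so padding a small read out to a larger block wins as soon as the
extra bytes cost less than the extra round trips, which on a remote
target they almost always do.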

> One thing that could be added to this is the idea of a sync point.
> When supplying data, the target could mark it as volatile.  Such 
> volatile data would then be drawn from the cache but only up until the 
> next sync point.  After that, a fetch would trigger a new read. 
> Returning to the command line, for instance, could be a sync point. 
> Individual x/i commands on a volatile region would be separated by sync 
> points, and hence would trigger separate reads.
> 
> Thoughts?  I think this provides at least one technical reason for 
> enabling the cache.

Interesting idea there.  I'm not quite sure how much work vs. return it
would be.
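
Sketching what I think you mean (invented names, not the real dcache
internals): the target says whether the data it supplies is volatile,
each cache line remembers the sync-point "epoch" it was filled in, and
a sync point just bumps the epoch so volatile lines stop being usable:

#include <string.h>

#define LINE_SIZE 64

struct dcache_line
{
  unsigned long base;
  unsigned char data[LINE_SIZE];
  int valid;
  int is_volatile;		/* target said: only trust this briefly */
  unsigned int epoch;		/* sync-point generation at fill time */
};

static unsigned int current_epoch;

/* Called, for example, each time GDB returns to the command line.  */
static void
sync_point (void)
{
  current_epoch++;
}

static int
line_usable (const struct dcache_line *line)
{
  if (!line->valid)
    return 0;
  /* Ordinary memory survives sync points; volatile data does not.  */
  return !line->is_volatile || line->epoch == current_epoch;
}

/* The target fills the line and reports whether the region is volatile.  */
static int
fill_and_read (struct dcache_line *line, unsigned long addr,
	       unsigned char *buf, int len,
	       int (*target_read) (unsigned long, unsigned char *,
				   int, int *is_volatile))
{
  unsigned long base = addr & ~(unsigned long) (LINE_SIZE - 1);

  if (!line_usable (line) || line->base != base)
    {
      int vol = 0;

      if (target_read (base, line->data, LINE_SIZE, &vol) < 0)
	return -1;
      line->base = base;
      line->valid = 1;
      line->is_volatile = vol;
      line->epoch = current_epoch;
    }
  memcpy (buf, line->data + (addr - base), len);
  return len;
}

With something like that, two x commands separated by a sync point each
go back to the target, while a GUI refresh between sync points is
satisfied from the cache, which sounds like the behaviour you describe.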

-- 
Daniel Jacobowitz
MontaVista Software                         Debian GNU/Linux Developer

