This is the mail archive of the gdb-prs@sources.redhat.com mailing list for the GDB project.



pending/1287: Always cache memory and registers


>Number:         1287
>Category:       pending
>Synopsis:       Always cache memory and registers
>Confidential:   yes
>Severity:       serious
>Priority:       medium
>Responsible:    unassigned
>State:          open
>Class:          change-request
>Submitter-Id:   unknown
>Arrival-Date:   Thu Jul 17 16:48:01 UTC 2003
>Closed-Date:
>Last-Modified:
>Originator:     
>Release:        
>Organization:
>Environment:
>Description:
 Hello,
 
 Think back to the rationale for GDB simply flushing its entire state 
 after the user modifies memory or a register.  No matter how inefficient 
 that update is, it can't be any worse than the full refresh needed after 
 a single step.  All effort should be put into making single step fast, 
 and not into making read-modify-write fast.
 
 I think I've just found a similar argument that can be used to justify 
 always enabling a data cache.  GDB's dcache is currently disabled (or at 
 least was the last time I looked :-).  The rationale was that the user, 
 when inspecting in-memory devices, would be confused if repeated reads 
 did not reflect the device's current register values.
 
 The problem with this is GUIs.
 
 A GUI can simultaneously display multiple views of the same memory 
 region.  Should each of those displays generate separate target reads 
 (with different values and side effects) or should they all share a 
 common cache?
 
 I think the latter, because it is impossible, from a GUI, to predict or 
 control the number of reads that a request will trigger.  Hence I'm 
 thinking that a data cache should be enabled by default.
 
 The only proviso is that the current cache and target vector 
 would need to be modified so that the cache only ever requests the data 
 needed, leaving it to the target to supply more if available (much like 
 registers do today).  The current dcache doesn't do this; instead, it 
 pads out small reads :-(
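To make the proviso concrete, here is a minimal sketch in C of a cache read that passes the exact uncached span down to the target instead of padding it out.  The structure and names (`dcache_read`, `target_read_fn`, the flat `valid` array) are purely illustrative, not GDB's real dcache API; the one real feature being modeled is that the target callback may volunteer more bytes than were asked for, the way register fetches do.

```c
#include <assert.h>
#include <string.h>

#define CACHE_SIZE 64

/* Hypothetical cache: one validity flag per byte, for illustration.  */
struct dcache {
  unsigned char data[CACHE_SIZE];
  unsigned char valid[CACHE_SIZE];  /* 1 if data[i] holds a cached byte */
};

/* Target read callback: fills BUF starting at ADDR, returns how many
   bytes it actually supplied -- possibly more than LEN, if extra data
   was cheap to provide.  */
typedef int (*target_read_fn) (int addr, unsigned char *buf, int len);

/* Read LEN bytes at ADDR, going to the target only for bytes not
   already cached.  The request passed down is the exact uncached run;
   nothing is padded out to a fixed line size.  */
static int
dcache_read (struct dcache *c, target_read_fn target_read,
             int addr, unsigned char *out, int len)
{
  int i;
  for (i = 0; i < len; i++)
    {
      if (!c->valid[addr + i])
        {
          /* Find the end of the uncached run and fetch just that.  */
          int run = 1, got, j;
          while (i + run < len && !c->valid[addr + i + run])
            run++;
          got = target_read (addr + i, &c->data[addr + i], run);
          if (got < run)
            return -1;          /* target could not supply the data */
          /* Mark everything the target supplied as valid, even bytes
             it volunteered beyond what was asked for.  */
          for (j = 0; j < got && addr + i + j < CACHE_SIZE; j++)
            c->valid[addr + i + j] = 1;
        }
      out[i] = c->data[addr + i];
    }
  return len;
}
```

With this shape, two GUI views of the same region hit the target once and then share the cached bytes.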
 
 One thing that could be added to this is the idea of a sync point.
 When supplying data, the target could mark it as volatile.  Such 
 volatile data would then be drawn from the cache but only up until the 
 next sync point.  After that a fetch would trigger a new read. 
 Returning to the command line, for instance, could be a sync point. 
 Individual x/i commands on a volatile region would be separated by sync 
 points, and hence would trigger separate reads.
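The sync-point idea above can be sketched as follows.  Everything here is hypothetical (a single-entry cache, a generation counter bumped at each sync point); it only demonstrates the rule that volatile data is served from the cache until a sync point passes, after which a fetch triggers a fresh target read, while non-volatile data survives sync points.

```c
#include <assert.h>

/* One cache entry, for illustration.  */
struct entry {
  int value;
  int is_volatile;   /* target flagged this data as volatile */
  int cached_at;     /* sync generation when the data arrived */
  int valid;
};

struct vcache {
  struct entry e;
  int generation;    /* bumped at every sync point */
  int target_reads;  /* counts real target accesses */
};

/* Simulated target: a device register whose value keeps changing.  */
static int
target_fetch (struct vcache *c)
{
  c->target_reads++;
  return 100 + c->target_reads;
}

/* A sync point -- e.g. the user returning to the command line.  */
static void
sync_point (struct vcache *c)
{
  c->generation++;
}

static int
vcache_read (struct vcache *c, int is_volatile)
{
  /* Volatile data is stale once a sync point has passed; non-volatile
     data stays valid indefinitely.  */
  if (c->e.valid
      && (!c->e.is_volatile || c->e.cached_at == c->generation))
    return c->e.value;
  c->e.value = target_fetch (c);
  c->e.is_volatile = is_volatile;
  c->e.cached_at = c->generation;
  c->e.valid = 1;
  return c->e.value;
}
```

Under this scheme, repeated x/i commands on a volatile region each fall on opposite sides of a sync point and so hit the device separately, while multiple GUI views between sync points share one read.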
 
 Thoughts?  I think this provides at least one technical reason for 
 enabling the cache.
 
 enjoy,
 Andrew
 
 
>How-To-Repeat:
>Fix:
>Release-Note:
>Audit-Trail:
>Unformatted:

