This is the mail archive of the gdb@sourceware.cygnus.com mailing list for the GDB project.



Re: memory regions, dcache


"J.T. Conklin" wrote:
> 
> Setting aside symbol table performance for the moment, I'm back
> finishing up memory region attributes.
> 
> In previous messages, I've shared my changes to target_xfer_memory().
> Those have not changed significantly.  Since that time, I've added a
> 'mem_attrib *attrib' argument to the target vectors to_xfer_memory()
> function, and have fixed up all the code for all the targets.  While I
> was there, I noticed that some targets' *_xfer_memory() functions did
> not declare the 'struct target_ops *target' argument, so I fixed that
> up at the same time.  (Btw, are there any *_xfer_memory() functions
> that use the 'target' argument?  I didn't see any while I was adding
> the new argument, but I wasn't looking for it either.  If nothing uses
> it, I wouldn't mind removing it as part of this change.)

I think, long term, the target will need to be a parameter to every
target method.

If GDB ever does find itself with two active target vectors, then this
will become an issue.  However, in the meantime ....
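
For concreteness, I read your description as ending up with something
like this (a sketch only - CORE_ADDR and the struct layout stand in
for GDB's real declarations, this is not the actual patch):

/* Sketch of the revised method, following the description above.  */

typedef unsigned long CORE_ADDR;

struct mem_attrib;                /* per-region attributes */

struct target_ops
{
  /* Transfer LEN bytes to/from MEMADDR; WRITE selects the direction.
     ATTRIB is the attribute set of the region being touched; TARGET
     is the vector the request arrived through.  Returns the number
     of bytes actually transferred.  */
  int (*to_xfer_memory) (CORE_ADDR memaddr, char *myaddr, int len,
                         int write, struct mem_attrib *attrib,
                         struct target_ops *target);
};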

> Now that the mem_attrib argument is available, target *_xfer_memory()
> functions may be able to tailor their behavior accordingly.  I say
> 'may be able' because currently most of the functions simply ignore
> the new argument (either because the function has not been changed to
> use the attribute, or perhaps because the host/target interface/
> protocol cannot support the attribute(s) selected).  Is there anything
> I should do about this now, or is the fact that some attributes are
> not supported on all targets a given that just needs to be documented?

If a user sets (pick a random attribute) the atomic attribute (see next
paragraph) but the target doesn't support it, shouldn't the user be told
that the attribute was ignored?  I think it is important not to mislead
the user here.

(By atomic I mean read/write a word as a word and not as a sequence of
bytes - assuming that is a possible attribute.  On some targets a word
read (arm?) returns a different value to a byte read).
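
Something along these lines, done once per transfer, would be enough
to keep the user informed (the bitmask representation is invented for
the sketch; the real mem_attrib may look quite different):

#include <stdio.h>

/* Invented attribute bits - purely for illustration.  */
#define MEM_ATTRIB_ATOMIC 0x01    /* access a word as a word */
#define MEM_ATTRIB_VERIFY 0x02    /* re-read and compare after write */

/* Warn about any requested attribute the target will ignore.  */
static void
warn_ignored_attribs (unsigned int requested, unsigned int supported)
{
  unsigned int ignored = requested & ~supported;

  if (ignored & MEM_ATTRIB_ATOMIC)
    fprintf (stderr,
             "warning: target does not support the `atomic' attribute; ignored.\n");
  if (ignored & MEM_ATTRIB_VERIFY)
    fprintf (stderr,
             "warning: target does not support the `verify' attribute; ignored.\n");
}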

> In most of the embedded remote targets, the target vector code calls
> dcache_init() in the *_open() function to register functions used to
> read and write memory, and the *_xfer_memory() functions call
> dcache_xfer_memory() which uses the functions registered earlier to
> perform the I/O.  The problem is that there is no way to pass the
> attribute argument through dcache to the functions.
> 
> This should be fairly easy to address --- just a lot of grunt work
> fixing up dcache_xfer_memory() and the read and write functions to
> take the new argument.  However, I wonder whether it would be better
> (cleaner, more "elegant", etc.) to move dcache up a layer and put it
> between target_xfer_memory() and the target *_xfer_memory() functions.
> 
> For example, instead of calling a target vector's *to_xfer_memory(),
> target_xfer_memory() would call dcache_xfer_memory() with a target
> vector pointer.  If dcache_xfer_memory() had to do i/o, it would call
> the target vector's *to_xfer_memory() function.

I think the elegant solution is preferable (but as you ask, is it
feasible?).
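
Just to be sure we mean the same layering, I picture the call path as
roughly the following (everything here except to_xfer_memory is a
stand-in name, not GDB's real code):

#include <stddef.h>

typedef unsigned long CORE_ADDR;
struct mem_attrib;
struct dcache;

struct target_ops
{
  int (*to_xfer_memory) (CORE_ADDR, char *, int, int,
                         struct mem_attrib *, struct target_ops *);
};

extern struct target_ops current_target;
extern struct dcache *target_dcache;
extern int dcache_hit (struct dcache *, CORE_ADDR, char *, int);

int
dcache_xfer_memory (struct dcache *cache, struct target_ops *ops,
                    CORE_ADDR memaddr, char *myaddr, int len, int write)
{
  /* Reads may be satisfied from the cache ...  */
  if (!write && dcache_hit (cache, memaddr, myaddr, len))
    return len;

  /* ... everything else goes to the target through its vector.  */
  return ops->to_xfer_memory (memaddr, myaddr, len, write,
                              NULL /* attrib */, ops);
}

int
target_xfer_memory (CORE_ADDR memaddr, char *myaddr, int len, int write)
{
  /* The cache sits between the generic and per-target layers.  */
  return dcache_xfer_memory (target_dcache, &current_target,
                             memaddr, myaddr, len, write);
}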

> Having the dcache at a higher level might be useful for implementing
> the verify (read and compare after writes) attribute.  As I imagine
> things, verify support would be implemented in target_xfer_memory()
> and would re-read and compare after a write.  But if the target code
> uses dcache, the read would not come from the target but the cache.
> On the other hand, target_xfer_memory() could do a cache invalidate
> before reading (and a more refined cache invalidate could easily be
> written).
> 
> All that being said, I think that going the simple route and adding
> the parameter to dcache_xfer_memory() and to the target read/write
> functions may be the right thing for now.  It won't be difficult to
> re-evaluate this decision and go for the more integrated approach if
> that turns out to be the right thing over time.
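
For what it's worth, the verify path described above would come out
roughly like this (dcache_invalidate_range () being the ``more refined
cache invalidate'' just mentioned - both it and the surrounding names
are invented for the sketch):

#include <string.h>

typedef unsigned long CORE_ADDR;
struct dcache;
struct target_ops;

extern int dcache_xfer_memory (struct dcache *, struct target_ops *,
                               CORE_ADDR, char *, int, int);
extern void dcache_invalidate_range (struct dcache *, CORE_ADDR, int);

/* Write LEN bytes, then re-read from the target (not the cache!)
   and compare.  Returns non-zero if the readback matched.  */
static int
write_with_verify (struct dcache *cache, struct target_ops *ops,
                   CORE_ADDR memaddr, char *myaddr, int len)
{
  char readback[64];              /* assume len <= 64 for the sketch */

  if (dcache_xfer_memory (cache, ops, memaddr, myaddr, len, 1) != len)
    return 0;

  /* Without this, the readback would be satisfied from the cache
     and the verify would be meaningless.  */
  dcache_invalidate_range (cache, memaddr, len);

  if (dcache_xfer_memory (cache, ops, memaddr, readback, len, 0) != len)
    return 0;

  return memcmp (myaddr, readback, len) == 0;
}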

Every time something in the target makes a call back into the guts of
GDB, the problem of implementing things like gdbserver becomes just that
bit harder.  The fewer things the target tries to do, the better.

With this in mind, have you considered a more incremental approach:

	o	add a new memory read/write method
		that has the interface you desire.

	o	have the new dcache use that interface

	o	provide a simple default implementation
		of the new method that uses the old
		interface (sketched below).
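
The default could be about this simple (to_xfer_memory_old is a
placeholder for whatever the existing method ends up being called):

typedef unsigned long CORE_ADDR;
struct mem_attrib;

struct target_ops
{
  int (*to_xfer_memory_old) (CORE_ADDR, char *, int, int,
                             struct target_ops *);
};

/* Default for the new attribute-aware method: drop the attribute on
   the floor and fall back to the existing method.  Targets that can
   honour attributes override this.  */
static int
default_xfer_memory (CORE_ADDR memaddr, char *myaddr, int len,
                     int write, struct mem_attrib *attrib,
                     struct target_ops *ops)
{
  /* ATTRIB deliberately unused - the old interface can't carry it.  */
  return ops->to_xfer_memory_old (memaddr, myaddr, len, write, ops);
}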

This should give you the freedom to ``get it working'' using the main
repository, and then people can fix their targets.  (Or I'll get Stan to
obsolete them :-)

It would also make dcache available to native targets.

The comment:

/* NOTE: cagney/1999-10-18: This function (and its siblings in other
   remote targets) shouldn't attempt to read the entire buffer.
   Instead it should read a single packet worth of data and then
   return the byte size of that packet to the caller.  The caller (its
   caller and its callers caller ;-) already contains code for
   handling partial reads. */

is relevant here.
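
In outline, the caller-side loop that comment refers to looks
something like this (a sketch, not the actual code):

typedef unsigned long CORE_ADDR;
struct mem_attrib;

struct target_ops
{
  int (*to_xfer_memory) (CORE_ADDR, char *, int, int,
                         struct mem_attrib *, struct target_ops *);
};

/* Each call to the method may move less than requested (e.g. one
   packet's worth); the loop accumulates until done, or until the
   target makes no progress.  */
static int
xfer_in_chunks (struct target_ops *ops, CORE_ADDR memaddr, char *myaddr,
                int len, int write, struct mem_attrib *attrib)
{
  int total = 0;

  while (total < len)
    {
      int n = ops->to_xfer_memory (memaddr + total, myaddr + total,
                                   len - total, write, attrib, ops);
      if (n <= 0)
        break;                    /* error, or no progress */
      total += n;
    }

  return total;
}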

[There is another issue lurking here; rather than raise it now, I'll
start a new thread.]

> One more thing before this gets too long.  There is a remotecache
> variable that globally enables the cache which is currently disabled.
> With attributes, each memory region has a cache/nocache attribute.
> Should the remotecache variable override the attribute?  Or can we
> deprecate remotecache completely?  Since, when no memory attributes
> are defined, the default attribute is used (which disables the cache),
> there isn't much need for a global flag any more.

You mean delete the command :-)

FYI, things have changed a little - it's now the static variable
dcache.c:dcache_enabled_p, and wince.c sets it using
dcache.c:set_dcache_state(1).

I think the command (but not the global variable) would need to stay
around for at least one more release.  Could it be converted into a
recursive call to the CLI - a command that enables/disables the cache
attribute on all memory regions?  Then again, is there any evidence
that people actually use it?
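
(By a recursive call into the CLI I mean something like the following -
execute_command () is real, but the region list and the command syntax
are invented for the sketch:)

#include <stdio.h>

/* Invented region list - the real structure will differ.  */
struct mem_region
{
  int number;
  struct mem_region *next;
};

extern struct mem_region *mem_region_chain;
extern void execute_command (char *cmd, int from_tty);

/* "set remotecache on|off" could just re-enter the CLI and flip the
   cache attribute on every defined region, leaving no global state.  */
static void
set_remotecache_command (int enable, int from_tty)
{
  struct mem_region *m;
  char cmd[64];

  for (m = mem_region_chain; m != NULL; m = m->next)
    {
      sprintf (cmd, "mem %d %s", m->number,
               enable ? "cache" : "nocache");
      execute_command (cmd, from_tty);
    }
}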

	Andrew
