This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.


Re: [PATCH 0/7] Support reading/writing memory on architectures with non 8-bits bytes


> Date: Thu, 9 Apr 2015 17:00:45 -0400
> From: Simon Marchi <simon.marchi@ericsson.com>
> CC: <gdb-patches@sourceware.org>
> 
> This was my train of thought:
> 
> - The gdb core (the target interface, I suppose?) would use octet indexing
>   and octet sizes, which the backend compensates for.  I understand that we
>   are clear on that, given your "Something like that, yes".
> - The functions handling the commands (the application level?) should be
>   agnostic about the byte size, meaning they won't make any adjustment for
>   it.
> - Therefore, if we take mi_cmd_data_read_memory_bytes as an example, the
>   front-end would have to pass double the address and size to get the
>   desired result.
> 
> If you want the gdb core to keep using octet addressing and octet sizes, the
> conversion needs to be done somewhere: in the user's head, in the front-end,
> or in the command-handling function before it passes control to a gdb core
> function.

I want GDB to be agnostic, as far as possible, about the size of one unit
of memory.  Ideally, one unit would start as one unit in user-level
commands and pass all the way down to the target level, which knows what
one unit means.  In the cases where that ideal is unreachable, there
should be two conversions: once from user-level commands to bytes, and
then a second time from bytes back to target-level units.
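
To make that concrete, here is a small stand-alone sketch of the two
conversions (plain C, not code from the patch series; the 2-octets-per-unit
figure is just an assumed example of a target with 16-bit bytes):

  /* Toy model of the two conversions described above.  Assumes a
     target whose smallest addressable unit is 16 bits, i.e. 2 octets
     per unit.  Nothing here is GDB code.  */
  #include <stdio.h>

  #define OCTETS_PER_UNIT 2     /* assumed: 16-bit target "byte" */

  int
  main (void)
  {
    unsigned long addr = 0x1000;   /* the address never changes */
    unsigned long len_units = 10;  /* what the user asked for */

    /* Conversion 1: user-level units -> octets, if the core has to
       carry sizes in octets.  */
    unsigned long len_octets = len_units * OCTETS_PER_UNIT;

    /* Conversion 2: octets -> target-level units, at the target
       boundary.  */
    unsigned long len_target = len_octets / OCTETS_PER_UNIT;

    printf ("addr=0x%lx  user units=%lu  core octets=%lu  target units=%lu\n",
            addr, len_units, len_octets, len_target);
    return 0;
  }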

> Sorry about that, I should have just used "x p".  The /10h part was not part
> of my point.  Following my earlier point, where the user would have needed to
> specify double the address, asking to read at address p would have returned
> memory starting at address p/2.

No, the addresses don't need to change at all.  Why would they?

> >> Also, the gdb code in the context of these platforms instantly becomes more
> >> hackish if you say that the address variable is not really the address we
> >> want to read, but double that address.
> > 
> > I didn't say that, either.
> 
> That's what I understood.  If the backend needs to adjust the address by
> dividing it by two, it means that the address parameter it received was
> double the actual address...

No, see above.

> >> Another problem: the DWARF information describes types using sizes in
> >> target bytes (at least in our case; other implementations could do it
> >> differently, I suppose).  The "char" type has a size of 1 (1 x 16 bits).
> > 
> > That's fine, just don't call that a "byte".  Call it a "word".
> 
> I actually started by using "word" throughout the code, but then I found it
> even more ambiguous than "byte".  In the context of the x command, a word is
> defined as four bytes, so it still clashes.

OK, "half-word", then.  (And please note that AFAIR there are
architectures where a "byte" is 32 bit wide, so there "word" would be
accurate.)
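
A minimal illustration of why the DWARF size of "char" is 1 on such a
target: the C implementation itself already counts in those units, so
CHAR_BIT would be 16 and sizeof would report unit counts.  This assumes a
hypothetical compiler for a 16-bit-byte target; it is not from the patch
series.

  /* On a target whose addressable unit is 16 bits, a conforming C
     compiler defines CHAR_BIT as 16, and sizeof counts those units,
     so a "char" of size 1 really is 16 bits.  */
  #include <limits.h>
  #include <stdio.h>

  int
  main (void)
  {
    printf ("CHAR_BIT      = %d\n", CHAR_BIT);        /* 16 on such a target */
    printf ("sizeof (char) = %zu\n", sizeof (char));  /* always 1 */
    printf ("sizeof (int)  = %zu\n", sizeof (int));   /* e.g. 1 or 2 units */
    return 0;
  }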

> >> So, when you "print myvar", gdb would have to know that it needs to convert
> >> the size to octets to request the right amount of memory.
> > 
> > No, it won't.  It sounds like my suggestion was totally misunderstood.
> 
> Indeed, I think I missed your point, and re-reading the discussion doesn't
> help.  Could you clarify a bit how you envision things working at the various
> levels in gdb?

I tried to do that above, let me know if something is still unclear.

> > My problem with your solution is that you require the user to change
> > her thinking about what a "byte" and "word" are.
> 
> It doesn't change anything for the existing users of GDB.  A byte will
> continue to be 8 bits on those platforms, so they don't need to change
> anything about how they think.

I would like to find a solution where a byte continues to be 8 bits on
_all_ platforms.

> I would assume that somebody developing for a system with 16-bit bytes would
> be very well aware of that fact.  It is quite fundamental.  They won't be
> shocked if the debugger shows 16 bits when they ask to read 1 byte.  Quite
> the opposite, actually: it will feel like a natural extension of the
> compiler.

What I have before my eyes is a user who debugs several different
platforms, and therefore doesn't stay immersed in this world of
different meanings for very long at a time.

