This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.



Re: [PATCH 0/7] Support reading/writing memory on architectures with non 8-bits bytes


> I want GDB to be agnostic, as far as possible, to the size of 1 unit
> of memory.  Ideally, one unit will start as one unit in user-level
> commands, pass all the way down to the target level, which should know
> what one unit means.

I totally agree with you, and I believe that's the idea you'll find implemented
in the patches.  The length is always passed in memory units of whatever you are
trying to read or write.  The only thing is that I called a "unit of memory" a
"byte", which seems to be the friction point.  If it's just a wording issue, it
can be changed easily; I just don't know what succinct term to use.
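To make that concrete, here is roughly the reading convention the patches
follow.  The signature is GDB's existing target_read_memory; the comment
wording is mine, sketching the intended semantics:

    /* Read LEN memory units (not octets) starting at MEMADDR.  On most
       targets a unit is an 8-bit byte; on a DSP with 16-bit bytes it is
       16 bits.  MYADDR must be large enough to hold LEN units.  */
    extern int target_read_memory (CORE_ADDR memaddr, gdb_byte *myaddr,
                                   ssize_t len);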

> In the cases where that ideal is unreachable,
> there should be two conversions: once from user-level commands to
> bytes, and then one other time from bytes back to target-level units.

In the memory read/write call chains, the only point where we need to know the
size of the memory unit in octets is when GDB is actually handling the data read
or to be written: we need to know how many octets to malloc, memcpy or
hex-encode/decode.
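As an illustration, that boundary conversion amounts to something like the
sketch below, assuming a gdbarch hook along the lines of
gdbarch_addressable_memory_unit_size (treat the names and surrounding code
as illustrative, not as the final patch):

    /* LEN is in memory units; convert to octets only at the point where
       the host-side buffer is actually allocated and filled.  */
    int unit_size = gdbarch_addressable_memory_unit_size (gdbarch);
    gdb_byte *buf = (gdb_byte *) xmalloc (len * unit_size);
    memcpy (buf, contents, len * unit_size);
    /* ... hand BUF off, then ... */
    xfree (buf);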

>> Sorry about that, I should have just used "x p". The /10h part was not part of
>> my point. Following my previous point where the user would have needed to specify
>> the double of the address, it would have meant that asking to read at address p
>> would have given memory starting at address p/2.
> 
> No, the addresses don't need to change at all.  Why would they?

That's what I understood from your "and instead to change the way addresses are
interpreted by the target back-end", and it was further confirmed when you replied
"Something like that, yes" to the example I gave, in which the address is doubled.

>>>> Also, the gdb code in the context of these platforms becomes instantly more
>>>> hackish if you say that the address variable is not really the address we want
>>>> to read, but the double.
>>>
>>> I didn't say that, either.
>>
>> That's what I understood. If the backend needs to adjust the address by dividing it
>> by two, it means that the address parameter it received was the double of the actual
>> address...
> 
> No, see above.

Same as above.

>>>> Another problem: the DWARF information describes the types using sizes in
>>>> target bytes (at least in our case, other implementations could do it
>>>> differently I suppose). The "char" type has a size of 1 (1 x 16-bits).
>>>
>>> That's fine, just don't call that a "byte".  Call it a "word".
>>
>> I actually started by using word throughout the code, but then I found it even
>> more ambiguous than byte. In the context of the x command, a word is defined as
>> four bytes, so it still clashes.
> 
> OK, "half-word", then.  (And please note that AFAIR there are
> architectures where a "byte" is 32 bit wide, so there "word" would be
> accurate.)

The term we are looking for is one that denotes a single memory unit, regardless of its size.
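To pin down the DWARF point above: on such a target the debug info already
counts in 16-bit units, so a dump of the "char" type would read something
like this (hypothetical excerpt, attribute values illustrative):

    DW_TAG_base_type
      DW_AT_name       "char"
      DW_AT_encoding   DW_ATE_signed_char
      DW_AT_byte_size  0x01   ; one target byte == 16 bits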

>>>> So, when you "print myvar", gdb would have to know that it needs to convert
>>>> the size to octets to request the right amount of memory.
>>>
>>> No, it won't.  It sounds like my suggestion was totally misunderstood.
>>
>> Indeed, I think I missed your point. Re-reading the discussion doesn't help. Could
>> you clarify a bit how you envision things would work at various levels in gdb?
> 
> I tried to do that above, let me know if something is still unclear.
> 
>>> My problem with your solution is that you require the user to change
>>> her thinking about what a "byte" and "word" are.
>>
>> It doesn't change anything for all the existing users of GDB. A byte will continue
>> to be 8-bits for those platforms. So they don't need to change anything about how
>> they think.
> 
> I would like to find a solution where a byte continues to be 8 bits on
> _all_ platforms.

OK, so if I understand correctly, you would be fine with the -data-read-memory-bytes
command accepting a length in memory units, as long as this unit is not called a
byte.  Is that right?  If so, it would confirm that it's a wording issue more than
a technical one.
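Concretely, on a target with 16-bit memory units, the exchange would look
something like this hypothetical session (addresses and contents invented):

    (gdb)
    -data-read-memory-bytes &myvar 2
    ^done,memory=[{begin="0x00001000",offset="0x00000000",
      end="0x00001002",contents="12ab34cd"}]

The length argument 2 counts memory units, so the contents field carries
4 octets' worth of hex digits.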

>> I would assume that somebody developing for a system with 16-bits byte would be very
>> well aware of that fact. It is quite fundamental. They won't be shocked if the
>> debugger shows 16-bits when they asked to read 1 byte. Quite the opposite actually,
>> it will feel like a natural extension of the compiler.
> 
> What I have before my eyes is a user who debugs several different
> platforms, and therefore doesn't immerse themselves in this world of
> different meanings for too long times.

I understand your concern.  The term "byte" is probably set in stone as 8 bits for
pretty much everybody, so trying to redefine it as a variable-size unit would likely
cause more harm than good.

Thanks a lot for your patience.

