This is the mail archive of the gdb@sources.redhat.com mailing list for the GDB project.



Re: [RFC] TARGET_CHAR_BIT != HOST_CHAR_BIT


When GDB is about to download a large amount of data over a remote interface, it breaks it up into smaller packets. These packets (the 'M' packets) carry the destination address as their first argument. The download of the first 'M' packet goes well, but the subsequent 'M' packets within that segment fail: GDB assumes that once it has downloaded n bytes, it should increase the lma address by n for the next packet.

The problem is that the tic4x target doesn't work this way. It has the following property: sizeof(char) == sizeof(short) == sizeof(int) == sizeof(long) == 1, *and* each of those types holds 32 bits of information. The tic4x has no concept of bytes, only a data bus 32 bits wide. Incrementing a data pointer by one increases the physical address by one, yet each address still spans 32 bits. Thus, to store the information at a particular address, you need 32 bits of storage. E.g.

char foo[2] = { 1, 2 };

Is located in memory like this:

0x1000: 0x00000001
0x1001: 0x00000002

There are two things at play here:


- the compiler's decision on how to implement char

The original alpha, for instance, had 8 bit addressable pointers yet the hardware could only read/write 64 bit words. Access to anything smaller than 64 bits was handled in software. Having the tic4x do something similar (presumably with long pointers) is just a ``small matter of programming''.

- physical limitations of the hardware

This is the important one. The data space pointers for this hardware identify 32 bit words, not 8 bit bytes.

So you see, if a segment contains 256 bytes, GDB still needs to download 256 bytes to the target (that's obvious), but the address span of those 256 bytes is only 64 (on target). So any lma address increment must be divided by 4 to be correct on this target.

As for the d10v solution: the tic4x is similar to the code space of that target. You could implement GDB this way, but I think you'd soon run into the same troubles. A char is still 32 bits, not the hardcoded 8. All accesses to addresses not on a 32-bit boundary would be invalid. And absolutely all addresses coming from binutils/BFD would have to be adjusted, because they are 32-bit oriented, not byte-oriented...

I think this needs to be pursued a bit more before being discarded.


I suspect that what's been proposed here would [further] overload the already overloaded TARGET_CHAR_BIT. Is something separate needed?

No and yes. Yes, because TARGET_CHAR_BIT doesn't affect the lma incrementing during packet download. And no, because a set_gdbarch_char_bit() setting already exists; but it's commented out, so it's not in use. This function/setting is probably what we would need for this port, if we could define it this way: TARGET_CHAR_BIT means "the number of bits required to represent the information stored at one unique address".

To expand on my point: TARGET_CHAR_BIT is used to identify:


- bitsizeof (char)
- the implied address alignment
- anything else such as debug info?

The first two are, as I noted above, orthogonal. The problem, I think, is that GDB has used them interchangeably.

To address this I can see two models.

- assume an 8 bit host byte size (aka bfd_byte)

This is effectively what GDB does now: via pointer_to_address, it maps a target pointer onto a canonical CORE_ADDR. For your architecture, a read of the word pointed at by 0x1000 would be converted into a read of four 8-bit bytes at 0x4000.

- use the target byte size

And have every memory manipulation remember which byte size (host or target) is used for each length computation.

I have a feeling that the first will be much easier. All, in theory, that is needed is for this target to implement a pointer_to_address that does the above manipulation (and then to stop GDB from trying to use TARGET_CHAR_BIT when moving memory around).

I've also got reservations over making the semantics of memory transfer operations architecture dependent. I think memory transfers should be defined in an architecture independent way.

Anyway, can you try setting up pointer_to_address and see what happens?

The problem with this approach is that GDB's CORE_ADDRs become visible to the user, viz:

(gdb) print/x $pc
$7 = 0x10140b8

That's the PC as a GDB CORE_ADDR.


(gdb) print/x (int)$pc
$8 = 0x502e

Whereas that's the actual pointer value.


(gdb) x/i $pc
0x10140b8 <main+20>:    ld      r0, @r11        ||      nop
(gdb) x/4b $pc
0x10140b8 <main+20>:    0x30    0x0b    0x5e    0x00

In both cases an examine works as expected. Note that x/b examines an 8 bit byte and not a 16 bit instruction word.


(gdb) x/4b (@code *)0x502e
A syntax error in expression, near `*)0x502e'.

Hmm, it would be better if that worked. It would save the need to do:


(gdb) x/4b (@code void *)0x502e
0x10140b8 <main+20>:    0x30    0x0b    0x5e    0x00

But note that this code pointer is very different to:


(gdb) x/4b (char *)0x502e
0x200502e:      0x00    0x00    0x00    0x00

which created a pointer into the data space.


I should note that having CORE_ADDR visible is a blessing in disguise. It makes operations such as x/b meaningful.

Andrew



